The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?
Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.
2018-01-01
The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…
Directory of Open Access Journals (Sweden)
Boldachev Alexander
2015-01-01
Full Text Available Any low-level process, such as the sequence of chemical interactions in a living cell, muscle cell activity, processor commands, or neuron interactions, is possible only if there is downward causality, that is, only due to the uniting and controlling power of the highest level. Therefore, there is no special “hard problem of consciousness”, i.e. the problem of the relation between ostensibly purely biological materiality and non-causal mentality; we have only the single philosophical problem of the relation between upward and downward causalities, the problem of the interrelation between hierarchical levels of existence. It is necessary to conclude that the problem of the determination of chemical processes by biological ones and the problem of neuron interactions caused by consciousness are of one nature and must have one solution.
The hard problem of cooperation.
Directory of Open Access Journals (Sweden)
Kimmo Eriksson
Full Text Available Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.
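The institutional mechanics described in this record can be sketched as a toy payoff computation for one round of a sanctioned public goods game; the endowment, multiplier, monitoring fee, fine, and contribution threshold below are illustrative assumptions, not the parameters of the actual experiment.

```python
def public_goods_round(contributions, multiplier=1.6, endowment=20,
                       fee=4, fine=12, threshold=10):
    """One round of a public goods game governed by an institution.

    Every player pays a monitoring fee; players contributing less than
    the operational-rule threshold are punished with a fine.  All
    numbers are hypothetical, for illustration only.
    """
    n = len(contributions)
    share = sum(contributions) * multiplier / n   # equal share of the pool
    payoffs = []
    for c in contributions:
        p = endowment - c + share - fee
        if c < threshold:                         # sanctioned by the rules
            p -= fine
        payoffs.append(p)
    return payoffs
```

With a high fine, complying with the demanded contribution yields a higher payoff than free riding, consistent with the behavior reported under severe rules.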
Energy Technology Data Exchange (ETDEWEB)
Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V. [National Academy of Sciences of the Republic of Armenia, Institute for Informatics and Automation Problems (Armenia)
2017-03-15
We study the classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations, we obtain a system of recurrence equations that allows node-by-node calculation of a spin chain. It is shown that calculation from the first principles of classical mechanics leads to an NP-hard problem which, however, in the limit of statistical equilibrium can be computed by a P algorithm. For the partition function of the ensemble a new representation is offered in the form of a one-dimensional integral over the spin chain's energy distribution.
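As a toy illustration of node-by-node chain computation (not the recurrence scheme of this record), consider the energy of a classical nearest-neighbor spin chain with unit 3-vector spins; for a ferromagnetic coupling, the ground state is built by aligning each node with its predecessor.

```python
def chain_energy(spins, J=1.0):
    # E = -J * sum_i s_i . s_(i+1), where the spins are unit 3-vectors
    return -J * sum(sum(a * b for a, b in zip(s, t))
                    for s, t in zip(spins, spins[1:]))

def aligned_chain(n):
    # node-by-node construction: copy the previous spin (ferromagnetic case)
    spins = [(0.0, 0.0, 1.0)]
    for _ in range(n - 1):
        spins.append(spins[-1])
    return spins
```

An aligned chain of n spins has energy -J(n-1), the ferromagnetic minimum for this toy model.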
Statistical physics of hard optimization problems
International Nuclear Information System (INIS)
Zdeborova, L.
2009-01-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named ”locked” constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
Statistical physics of hard optimization problems
Zdeborová, Lenka
2009-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult, it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
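The worst-case exponential cost mentioned in these records can be made concrete with a naive exhaustive check for random graph coloring; this brute-force sketch is an illustration of the exponential search space, not the cavity method discussed in the abstracts.

```python
import itertools

def is_proper(coloring, edges):
    # a coloring is proper when no edge joins two vertices of equal color
    return all(coloring[u] != coloring[v] for u, v in edges)

def q_colorable(n, edges, q=3):
    # exhaustive search over all q**n assignments: exponential in n
    return any(is_proper(c, edges)
               for c in itertools.product(range(q), repeat=n))
```

Even at modest n, the q**n assignments become infeasible to enumerate, which is why the message-passing methods described above are of interest.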
Box, Simon
2014-12-01
Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings at the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus to develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
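The imitation idea in this record can be sketched with a stdlib stand-in for the simple neural network classifiers: a 1-nearest-neighbour classifier over logged (traffic state, human action) pairs. The state encoding (queue lengths per approach) and the action names are hypothetical.

```python
def train_imitation(examples):
    """Return a 1-nearest-neighbour classifier over logged
    (state, action) pairs recorded from a human player."""
    def classify(state):
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        # pick the action of the closest logged state
        return min(examples, key=lambda ex: sq_dist(ex[0], state))[1]
    return classify
```

Given enough logged play, such a classifier reproduces the human's signal-switching policy on unseen states, which is the essence of the human-trained control approach.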
An overview on polynomial approximation of NP-hard problems
Directory of Open Access Journals (Sweden)
Paschos Vangelis Th.
2009-01-01
Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for optimally solving the NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, the polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
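A classic instance of the polynomial approximation algorithms surveyed in this record is the maximal-matching 2-approximation for minimum vertex cover, sketched here in a few lines.

```python
def vertex_cover_2approx(edges):
    """Repeatedly take both endpoints of any uncovered edge.
    The chosen edges form a matching, and any cover must contain at
    least one endpoint of each matching edge, so the result is at most
    twice the size of an optimal vertex cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

This runs in linear time in the number of edges and guarantees the factor-2 criterion, which is exactly the kind of predefined quality criterion the survey formalizes.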
A further problem of the hard problem of consciousness | Gbenga ...
African Journals Online (AJOL)
Justifying this assertion is identified as the further problem of the hard problem of consciousness. This shows that assertions about phenomenal properties of mental experiences are wholly epistemological. Hence, the problem of explaining phenomenal properties of a mental state is not a metaphysical problem, and what is ...
Heuristics for NP-hard optimization problems - simpler is better!?
Directory of Open Access Journals (Sweden)
Žerovnik Janez
2015-11-01
Full Text Available We provide several examples showing that local search, the most basic metaheuristic, may be a very competitive choice for solving computationally hard optimization problems. In addition, generation of starting solutions by greedy heuristics should at least be considered as one of the very natural possibilities. In this critical survey, the selected examples discussed include the traveling salesman problem, resource-constrained project scheduling, channel assignment, and computation of bounds for the Shannon capacity.
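As an example of the basic local search advocated in this record, here is a minimal first-improvement 2-opt for the traveling salesman problem: reverse a segment of the tour whenever that shortens it, until no improving move exists (a sketch, not one of the surveyed implementations).

```python
import math

def tour_length(tour, pts):
    # total length of the closed tour through the given points
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour=None):
    """First-improvement 2-opt local search; stops at a local optimum."""
    n = len(pts)
    tour = list(range(n)) if tour is None else tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = new, True
    return tour
```

On a self-crossing tour of the four corners of a unit square, one 2-opt move removes the crossing and reaches the optimal perimeter of length 4.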
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning
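The phase-transition construction described in this record can be illustrated for random 3-SAT, where the satisfiable/unsatisfiable transition is empirically observed near a clause-to-variable ratio of about 4.27; the generator and brute-force checker below are a toy sketch, not the report's planning encodings.

```python
import itertools
import random

def random_3sat(n, alpha=4.27, seed=0):
    """Random 3-SAT instance with round(alpha * n) clauses.
    A clause is a list of (variable, sign) literals; sign True means
    the positive literal."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(alpha * n)):
        vars_ = rng.sample(range(n), 3)   # three distinct variables
        clauses.append([(v, rng.choice((True, False))) for v in vars_])
    return clauses

def brute_force_sat(n, clauses):
    # exponential enumeration of all 2**n assignments
    for bits in itertools.product((True, False), repeat=n):
        if all(any(bits[v] == sign for v, sign in c) for c in clauses):
            return True
    return False
```

Instances drawn at the transition ratio are, on average, the hardest for complete solvers, which is what makes such parametrized families useful for scaling analysis even at small sizes.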
NP-hardness of the cluster minimization problem revisited
Adib, Artur B.
2005-10-01
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
Computational problems in engineering
Mladenov, Valeri
2014-01-01
This book provides readers with modern computational techniques for solving variety of problems from electrical, mechanical, civil and chemical engineering. Mathematical methods are presented in a unified manner, so they can be applied consistently to problems in applied electromagnetics, strength of materials, fluid mechanics, heat and mass transfer, environmental engineering, biomedical engineering, signal processing, automatic control and more. • Features contributions from distinguished researchers on significant aspects of current numerical methods and computational mathematics; • Presents actual results and innovative methods that provide numerical solutions, while minimizing computing times; • Includes new and advanced methods and modern variations of known techniques that can solve difficult scientific problems efficiently.
Constraint satisfaction problems with isolated solutions are hard
International Nuclear Information System (INIS)
Zdeborová, Lenka; Mézard, Marc
2008-01-01
We study the phase diagram and the algorithmic hardness of the random 'locked' constraint satisfaction problems, and compare them to the commonly studied 'non-locked' problems like satisfiability of Boolean formulae or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This significantly simplifies the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.
Computational Modeling Develops Ultra-Hard Steel
2007-01-01
Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treatment, shot peening, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.
Locating phase transitions in computationally hard problems
Indian Academy of Sciences (India)
New applications of statistical mechanics; analysis of algorithms; heuristics; phase transitions and critical ...
Rad-hard embedded computers for nuclear robotics
International Nuclear Information System (INIS)
Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.
1993-01-01
For the requirements of the nuclear industries, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs
[Computer-assisted phacoemulsification for hard cataracts].
Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina
2012-01-01
To evaluate the efficiency of the new torsional phacoemulsification software (Ozil IP system) in hard nucleus cataract extraction. 45 eyes with hard senile cataract (grades III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a Kelman miniflared 45-degree phaco tip. The nucleus was split in two; the first half was then phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For each group we measured: cumulative dissipated energy (CDE), the number of tip occlusions that needed manual disobstruction, and the amount of BSS used. The mean CDE was the same in group 1 and in group 2 (between 6.2 and 14.9). The incidence of occlusions that needed manual disobstruction was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. The new torsional software (IP system) significantly decreased occlusion time and balanced salt solution use compared with standard torsional software, particularly with denser cataracts.
Micro-computer cards for hard industrial environment
Energy Technology Data Exchange (ETDEWEB)
Breton, J M
1984-03-15
Approximately 60% of present or future distributed systems have, or will have, operational units installed in hard environments. In these applications, which include pipeline and industrial motor control, robotics, and process control, systems must be easy to deploy in environments not designed for electronics. The development of card systems for this hard industrial environment, which is found in the petrochemical industry and in mines, is described. The National Semiconductor CIM card system, based on CMOS technology, allows real-time microcomputer applications to be efficient and functional in hard industrial environments.
Structural qualia: a solution to the hard problem of consciousness.
Loorits, Kristjan
2014-01-01
The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.
A Comparison of Approaches for Solving Hard Graph-Theoretic Problems
2015-04-29
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and the computational complexity makes this research approach difficult using a standard brute force approach on a ...
New Unconditional Hardness Results for Dynamic and Online Problems
DEFF Research Database (Denmark)
Clifford, Raphaël; Jørgensen, Allan Grønlund; Larsen, Kasper Green
2015-01-01
Data summarization is an effective approach to dealing with the 'big data' problem. While data summarization problems have traditionally been studied in the streaming model, the focus is starting to shift to distributed models, as distributed/parallel computation seems to be the only viable way...... to handle today's massive data sets. In this paper, we study ε-approximations, a classical data summary that, intuitively speaking, approximately preserves the density of the underlying data set over a certain range space. We consider the problem of computing ε-approximations for a data set which is held
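The ε-approximation summary described in this abstract can be sketched for the simplest range space, intervals on a line: a random sample of size proportional to 1/ε² preserves the density of every interval to within about ε with high probability. The sample-size constant below is an illustrative assumption, not a bound from the paper.

```python
import random

def eps_sample(points, eps, seed=0):
    """Random sample intended as an eps-approximation for interval
    ranges; the constant 4 in the sample size is illustrative only."""
    rng = random.Random(seed)
    size = max(1, round(4 / eps ** 2))
    return [rng.choice(points) for _ in range(size)]

def interval_density(data, lo, hi):
    # fraction of data points falling in the interval [lo, hi]
    return sum(lo <= x <= hi for x in data) / len(data)
```

The small sample can then stand in for the full data set when answering range-density queries, which is what makes such summaries useful in distributed settings.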
Polyomino Problems to Confuse Computers
Coffin, Stewart
2009-01-01
Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…
Statistical physics of hard combinatorial optimization: Vertex cover problem
Zhao, Jin-Hua; Zhou, Hai-Jun
2014-07-01
Typical-case computational complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computational complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand to a large extent the physics behind the mean field approaches and to adjust the mean field methods for solving other optimization problems.
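One simple algorithm from this vertex cover literature can be sketched: leaf removal (studied by Bauer and Golinelli), which repeatedly puts the neighbour of a degree-one vertex into the cover and deletes both; on random graphs without an extensive leftover core this yields an optimal cover. The sketch below assumes the graph is given as an adjacency dictionary.

```python
def leaf_removal_cover(adj):
    """Leaf-removal heuristic for minimum vertex cover.
    adj maps each vertex to the set of its neighbours."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    cover = set()
    while True:
        leaf = next((u for u, vs in adj.items() if len(vs) == 1), None)
        if leaf is None:
            break                      # no degree-1 vertex left
        (v,) = adj[leaf]               # the leaf's unique neighbour
        cover.add(v)
        for w in (leaf, v):            # delete both vertices
            for x in adj.pop(w, set()):
                if x in adj:
                    adj[x].discard(w)
    return cover
```

If the loop stops with edges remaining (a nonempty core), the heuristic alone is no longer guaranteed optimal, which is precisely where the replica-symmetry-breaking analysis discussed above becomes relevant.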
Nuclear many-body problem with repulsive hard core interactions
Energy Technology Data Exchange (ETDEWEB)
Haddad, L M
1965-07-01
The nuclear many-body problem is considered using the perturbation-theoretic approach of Brueckner and collaborators. This approach is outlined with particular attention paid to the graphical representation of the terms in the perturbation expansion. The problem is transformed to centre-of-mass coordinates in configuration space and difficulties involved in ordinary methods of solution of the resulting equation are discussed. A new technique, the 'reference spectrum method', devised by Bethe, Brandow and Petschek in an attempt to simplify the numerical work, is presented. The basic equations are derived in this approximation and, considering the repulsive hard core part of the interaction only, the effective mass is calculated at high momentum (using the same energy spectrum for both 'particle' and 'hole' states). The result of 0.87m is in agreement with that of Bethe et al. A more complete treatment using the reference spectrum method is introduced and a self-consistent set of equations is established for the reference spectrum parameters, again for the case of hard core repulsions. (author)
Solving computationally expensive engineering problems
Leifsson, Leifur; Yang, Xin-She
2014-01-01
Computational complexity is a serious bottleneck for the design process in virtually any engineering area. While migration from prototyping and experiment-based design validation to verification using computer simulation models is inevitable and has a number of advantages, the high computational cost of accurate, high-fidelity simulations can be a major issue that slows down the development of computer-aided design methodologies, particularly those exploiting automated design improvement procedures, e.g., numerical optimization. The continuous increase of available computational resources does not always translate into shortening of the design cycle because of the growing demand for higher accuracy and the necessity to simulate larger and more complex systems. Accurate simulation of a single design of a given system may take as long as several hours, days or even weeks, which often makes design automation using conventional methods impractical or even prohibitive. Additional problems include numerical noise often pr...
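The surrogate idea underlying this line of work can be sketched in one dimension: evaluate the expensive simulation at three designs, fit the interpolating parabola, and take the parabola's minimizer as the next candidate design. This is a generic sketch (assuming the fitted parabola opens upward), not a method from the book itself.

```python
def quadratic_surrogate_min(f, x0, x1, x2):
    """One step of surrogate-based optimization: fit the parabola
    a*x**2 + b*x + c through three evaluations of the expensive
    function f and return the parabola's minimizer (assumes a > 0)."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0)
         + x0 ** 2 * (y1 - y2)) / denom
    return -b / (2 * a)
```

Only three expensive evaluations are spent per step, which is the point of replacing the high-fidelity model with a cheap analytical surrogate.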
Security Problems in Cloud Computing
Directory of Open Access Journals (Sweden)
Rola Motawie
2016-12-01
Cloud is a pool of computing resources which are distributed among cloud users. Cloud computing has many benefits like scalability, flexibility, cost savings, reliability, maintenance and mobile accessibility. Since cloud-computing technology is growing day by day, it comes with many security problems. Securing the data in the cloud environment is one of the most critical challenges and acts as a barrier when implementing the cloud. The many new concepts that the cloud introduces, such as resource sharing, multi-tenancy, and outsourcing, create new challenges for the security community. In this work, we provide a comparative study of cloud computing privacy and security concerns. We identify and classify known security threats, cloud vulnerabilities, and attacks.
Deaf and hard of hearing students' problem-solving strategies with signed arithmetic story problems.
Pagliaro, Claudia M; Ansell, Ellen
2012-01-01
The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e., modeling, counting, and fact-based strategies), they showed an overwhelming use of counting strategies for all types of problems and at all ages. This difference may have its roots in language or instruction (or in both), and calls attention to the need for conceptual rather than procedural mathematics instruction for deaf and hard of hearing students.
Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.
Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G
2017-02-17
Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders
International Nuclear Information System (INIS)
Viveros-Méndez, P. X.; Aranda-Espinoza, S.; Gil-Villegas, Alejandro
2014-01-01
In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e^2/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions L_x ≈ L_y and L_z = 5L_x, where L_x, L_y, and L_z are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
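The Metropolis NVT scheme underlying such a study can be reduced to a toy sketch: hard rods on a vertical line in a gravitational field, with trial moves rejected on overlap and otherwise accepted with the Boltzmann probability. All parameters below are invented for illustration, and the electrostatic (Wolf-method) part of the actual simulation is omitted:

```python
import math
import random

def mc_sedimentation(n=50, height=100.0, sigma=1.0, g_over_kt=0.5,
                     steps=20000, seed=1):
    """Metropolis NVT sketch: n hard rods of length sigma on a vertical line
    of the given height, with reduced gravitational energy g_over_kt * z per
    particle. Returns the final z positions."""
    rng = random.Random(seed)
    z = [height * (i + 0.5) / n for i in range(n)]        # evenly spread start
    for _ in range(steps):
        i = rng.randrange(n)                              # pick a random particle
        znew = z[i] + rng.uniform(-1.0, 1.0)              # trial displacement
        if not 0.0 <= znew <= height:
            continue                                      # reject: outside the cell
        if any(abs(znew - z[j]) < sigma for j in range(n) if j != i):
            continue                                      # reject: hard-core overlap
        de = g_over_kt * (znew - z[i])                    # energy change (gravity only)
        if de <= 0.0 or rng.random() < math.exp(-de):     # Metropolis acceptance
            z[i] = znew
    return z
```

Running this drives the mean height down as the rods settle against the hard-core floor, a one-dimensional caricature of the layering the study reports.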
Excavation Technology for Hard Rock - Problems and Prospects
International Nuclear Information System (INIS)
Gillani, S.T.A.; Butt, N.
2009-01-01
Civil engineering projects have greatly benefited from the mechanical excavation of hard rock technology. Mining industry, on the other hand, is still searching for major breakthroughs to mechanize and then automate the winning of ore and drivage of access tunnels in its metalliferous sector. The aim of this study is to extend the scope of drag bits for road headers in hard rock cutting. Various factors that can impose limitations on the potential applications of drag bits in hard rock mining are investigated and discussed along with alternative technology options. (author)
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders
Energy Technology Data Exchange (ETDEWEB)
Viveros-Méndez, P. X., E-mail: xviveros@fisica.uaz.edu.mx; Aranda-Espinoza, S. [Unidad Académica de Física, Universidad Autónoma de Zacatecas, Calzada Solidaridad esq. Paseo, La Bufa s/n, 98060 Zacatecas, Zacatecas, México (Mexico); Gil-Villegas, Alejandro [Departamento de Ingeniería Física, División de Ciencias e Ingenierías, Campus León, Universidad de Guanajuato, Loma del Bosque 103, Lomas del Campestre, 37150 León, Guanajuato, México (Mexico)
2014-07-28
In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e^2/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions L_x ≈ L_y and L_z = 5L_x, where L_x, L_y, and L_z are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
Quantum Computing's Classical Problem, Classical Computing's Quantum Problem
Van Meter, Rodney
2013-01-01
Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classica...
International Nuclear Information System (INIS)
Spencer, VN
2001-01-01
An investigation has been conducted regarding the ability of clustered personal computers to improve the performance of executing software simulations for solving engineering problems. The power and utility of personal computers continues to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost-effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Therefore, the computation has to be done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternatively, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts the cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran.
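The master-worker pattern that PVM provides on such a cluster can be illustrated, in spirit only, with Python's standard multiprocessing module; the toy dynamics step below is a hypothetical stand-in for the Fortran computation, and the four workers stand in for four clustered PCs:

```python
from multiprocessing import Pool

def step(body):
    """One explicit Euler step of the oscillator x'' = -x for a single body;
    a hypothetical stand-in for one slice of a dynamics computation."""
    x, v = body
    return (x + 0.01 * v, v - 0.01 * x)

if __name__ == "__main__":
    bodies = [(float(i), 0.0) for i in range(1000)]
    with Pool(4) as pool:                  # 4 workers stand in for 4 clustered PCs
        bodies = pool.map(step, bodies)    # scatter the work, gather the results
    print(len(bodies))
```

The same scatter-gather structure is what PVM expresses with explicit message passing between networked machines.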
Structured pigeonhole principle, search problems and hard tautologies
Czech Academy of Sciences Publication Activity Database
Krajíček, Jan
2005-01-01
Roč. 70, č. 2 (2005), s. 619-630 ISSN 0022-4812 R&D Projects: GA AV ČR(CZ) IAA1019401; GA MŠk(CZ) LN00A056 Institutional research plan: CEZ:AV0Z10190503 Keywords: proof complexity * pigeonhole principle * search problems Subject RIV: BA - General Mathematics Impact factor: 0.470, year: 2005
Complex network problems in physics, computer science and biology
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science to biology, and we also use methods from statistical physics in solving hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe
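As a toy illustration of the Satisfiability problem named in the abstract above, the following brute-force sweep over all 2^n assignments is exactly the exponential search that makes the problem NP-complete (the clause encoding is the standard DIMACS convention, not anything from the thesis itself):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """clauses: list of clauses, where literal +i means variable i and -i
    means its negation (variables numbered from 1). Returns a satisfying
    assignment as a tuple of booleans, or None if none exists."""
    for assignment in product([False, True], repeat=n_vars):  # 2^n candidates
        value = lambda lit: assignment[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```

The mapping to physics replaces this exhaustive search with the statistics of ground states of a spin system whose energy counts violated clauses.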
Computational problems in Arctic Research
International Nuclear Information System (INIS)
Petrov, I
2016-01-01
This article surveys the main computational problems in the area of Arctic shelf seismic prospecting and the exploitation of the Northern Sea Route: simulation of the interaction of different ice formations (icebergs, hummocks, and drifting ice floes) with fixed ice-resistant platforms; simulation of the interaction of icebreakers and ice-class vessels with ice formations; modeling of the impact of ice formations on underground pipelines; neutralization of damage to fixed and mobile offshore industrial structures from ice formations; calculation of the strength of ground pipelines; transportation of hydrocarbons by pipeline; the problem of migration of large ice formations; modeling of the formation of ice hummocks on an ice-resistant stationary platform; calculation of the stability of fixed platforms; calculation of dynamic processes in the water and air of the Arctic, with the processing of data and its use to predict the dynamics of ice conditions; simulation of the formation of large icebergs, hummocks, and large ice platforms; calculation of ridging in the dynamics of sea ice; direct and inverse problems of seismic prospecting in the Arctic; direct and inverse problems of electromagnetic prospecting of the Arctic. All these problems can be solved by up-to-date numerical methods, for example the grid-characteristic method. (paper)
Deaf and Hard of Hearing Students' Problem-Solving Strategies with Signed Arithmetic Story Problems
Pagliaro, Claudia M.; Ansell, Ellen
2011-01-01
The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e.,…
Gebremedhin, Daniel H; Weatherford, Charles A
2015-02-01
This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.
Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M
2016-01-01
The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and present trends in protein folding simulations from both perspectives, hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of soft computing technique.
The Czech base of hard coal, problems, possibilities for utilization
International Nuclear Information System (INIS)
Cermak, T.; Roubicek, V.
1993-01-01
The Czech coal and power engineering base is now in a period of deep restructuring. The basic problem is the changeover from the system of the centrally planned state economy to a market model of energy resource mining, production and consumption. The Czech economy will have to face hitherto unknown competitive forces on the coal market in Europe, where American, Canadian, Australian and South African coals compete. The paper discusses historical aspects of the development of the coal mining industry in Czechoslovakia, present coal preparation techniques for coking coals, the coking industry, and the utilization of brown coal. How the domestic coal base, and coal generally, can be utilized is closely connected with the global restructuring of the Czech economy. The most difficult step of this process is undoubtedly the adaptation of the Czech fuel and energy base to market economy conditions.
Rad-hard embedded computers for nuclear robotics
International Nuclear Information System (INIS)
Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.
1994-01-01
Nuclear industries require robots with embedded rad-hard electronics and high reliability. The SYROCO research program made it possible to build efficient industrial prototypes according to the MICADO architecture and to design the CADMOS architecture. The MICADO architecture exploits the self-healing property that CMOS circuits have when switched off during irradiation. (D.L.). 8 refs., 5 figs.
The Impact of Hard Disk Firmware Steganography on Computer Forensics
Directory of Open Access Journals (Sweden)
Iain Sutherland
2009-06-01
The hard disk drive is probably the predominant form of storage media and is a primary data source in a forensic investigation. The majority of available software tools and literature relating to the investigation of the structure and content contained within a hard disk drive concerns the extraction and analysis of evidence from the various file systems which can reside in the user-accessible area of the disk. It is known that there are other areas of the hard disk drive which could be used to conceal information, such as the Host Protected Area and the Device Configuration Overlay. There are recommended methods for the detection and forensic analysis of these areas using appropriate tools and techniques. However, there are additional areas of a disk that have currently been overlooked. The Service Area, or Platter Resident Firmware Area, is used to store code and control structures responsible for the functionality of the drive and for logging failing or failed sectors. This paper provides an introduction to initial research into the investigation and identification of issues relating to the analysis of the Platter Resident Firmware Area. In particular, the possibility that the Platter Resident Firmware Area could be manipulated and exploited to facilitate a form of steganography, enabling information to be concealed by a user and potentially from a digital forensic investigator.
Computational Study on a PTAS for Planar Dominating Set Problem
Directory of Open Access Journals (Sweden)
Qian-Ping Gu
2013-01-01
The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduces a k-outer planar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n) time (c a constant) (1+1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio (1+2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1+1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.
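The problem the PTAS above approximates can be stated concretely with a much simpler method: the classic greedy heuristic, which repeatedly picks the vertex covering the most still-undominated vertices. This is only an illustration of the problem, not the decomposition-based scheme from the paper, and the example graph is invented:

```python
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to its set of neighbours.
    Greedy heuristic: repeatedly take the vertex that dominates the most
    still-undominated vertices (itself plus its neighbours)."""
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

# star graph: the centre alone dominates everything
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(star))
```

Unlike the PTAS, the greedy heuristic gives only a logarithmic approximation guarantee in general graphs, which is what makes planarity-based schemes interesting.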
Energy Technology Data Exchange (ETDEWEB)
Giraud, A; Joffre, F; Marceau, M; Robiolle, M; Brunet, J P; Mijuin, D
1994-12-31
To meet the requirements of nuclear industries, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs.
Smith, Glenn Gordon
2012-01-01
This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…
Problems in education, employment and social integration of hard of hearing artists
Directory of Open Access Journals (Sweden)
Radić-Šestić Marina
2013-01-01
The aim of this research was to determine the problems in education (primary, secondary and undergraduate academic studies), employment and social integration of hard of hearing artists, based on a multiple case study. The sample consisted of 4 examinees of both genders, aged between 29 and 54, from the field of visual arts (a painter, a sculptor, a graphic designer, and an interior designer). The structured interview consisted of 30 questions testing three areas: the first area involved family, primary and secondary education; the second area was about the length of studying and socio-emotional problems of the examinees; the third area dealt with problems in employment and job satisfaction of our examinees. Research results indicate the existence of several problems which more or less affect success in the education, employment and social integration of hard of hearing artists. One of the problems which can influence the development of language abilities, socio-emotional maturity, and better educational achievement of hard of hearing artists in general is delay in diagnosing hearing impairments, amplification and auditory rehabilitation. Furthermore, parents of hard of hearing artists have difficulties in adjusting to their children's hearing impairments and ignore the language and culture of the Deaf, i.e. they tend to identify their children with the typically developing population. Another problem is the negative attitudes of teachers/professors/employers and typically developing peers/colleagues towards the inclusion of hard of hearing people into the regular education/employment system. Apart from that, unmodified instruction, course books, information, and school and working areas further complicate the acquisition of knowledge and information, and the progress of hard of hearing people in education and profession.
Using Computer Simulations in Chemistry Problem Solving
Avramiotis, Spyridon; Tsaparlis, Georgios
2013-01-01
This study is concerned with the effects of computer simulations of two novel chemistry problems on the problem solving ability of students. A control-experimental group, equalized by pair groups (n[subscript Exp] = n[subscript Ctrl] = 78), research design was used. The students had no previous experience of chemical practical work. Student…
Computational problems in science and engineering
Bulucea, Aida; Tsekouras, George
2015-01-01
This book provides readers with modern computational techniques for solving a variety of problems from electrical, mechanical, civil and chemical engineering. Mathematical methods are presented in a unified manner, so they can be applied consistently to problems in applied electromagnetics, strength of materials, fluid mechanics, heat and mass transfer, environmental engineering, biomedical engineering, signal processing, automatic control and more.
Solving the 0/1 Knapsack Problem by a Biomolecular DNA Computer
Directory of Open Access Journals (Sweden)
Hassan Taghipour
2013-01-01
Solving some mathematical problems, such as NP-complete problems, by conventional silicon-based computers is problematic and takes a very long time. DNA computing is an alternative method of computing which uses DNA molecules for computing purposes. DNA computers have massive degrees of parallel processing capability. The massive parallel processing characteristic of DNA computers is of particular interest in solving NP-complete and hard combinatorial problems. NP-complete problems such as the knapsack problem and other hard combinatorial problems can be easily solved by DNA computers in a very short period of time compared to conventional silicon-based computers. Sticker-based DNA computing is one of the methods of DNA computing. In this paper, sticker-based DNA computing was used for solving the 0/1 knapsack problem. At first, a biomolecular solution space was constructed by using appropriate DNA memory complexes. Then, by the application of a sticker-based parallel algorithm using biological operations, the knapsack problem was resolved in polynomial time.
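For comparison with the sticker-based parallel search described above, small 0/1 knapsack instances are routinely solved on conventional computers by dynamic programming, pseudo-polynomial in the capacity. A minimal sketch, with an instance invented for illustration:

```python
def knapsack_01(values, weights, capacity):
    """Classic O(n * capacity) dynamic program for the 0/1 knapsack problem:
    best[c] holds the best value achievable with total weight at most c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # backwards: each item used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# hypothetical instance: three items, knapsack capacity 5
print(knapsack_01([60, 100, 120], [1, 2, 3], 5))  # -> 220
```

The DNA approach instead materialises the whole 2^n solution space as memory complexes and filters it with biological operations, which is where its parallelism comes from.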
Second benchmark problem for WIPP structural computations
International Nuclear Information System (INIS)
Krieg, R.D.; Morgan, H.S.; Hunter, T.O.
1980-12-01
This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. More importantly, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers, 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications.
Problems in accounting for the soft and hard components in transverse energy triggers
International Nuclear Information System (INIS)
Anjos, J.C.; Santoro, A.F.S.; Souza, M.H.G.; Escobar, C.O.
1983-01-01
It is argued that for a transverse energy trigger, the cancellation theorem of DeTar, Ellis and Landshoff is not valid. As a consequence, the problem of accounting for soft and hard components in this kind of trigger becomes complicated and no simple separation between them is expected. (Author) [pt
Date Sensitive Computing Problems: Understanding the Threat
1998-08-29
equipment on Earth. It can also interfere with electromagnetic signals from such devices as cell phones, radio, television, and radar. By itself, the ... spacecraft. Debris from impacted satellites will add to the existing orbital debris problem, and could eventually cause damage to other satellites... Date Sensitive Computing Problems Understanding the Threat Aug. 17, 1998 Revised Aug. 29, 1998 Prepared by: The National Crisis Response
Ocular problems of computer vision syndrome: Review
Directory of Open Access Journals (Sweden)
Ayakutty Muni Raja
2015-01-01
Nowadays, ophthalmologists are facing a new group of patients having eye problems related to prolonged and excessive computer use. When the demand for near work exceeds the normal ability of the eye to perform the job comfortably, one develops discomfort, and prolonged exposure leads to a cascade of reactions that can be put together as computer vision syndrome (CVS). In India, the computer-using population is more than 40 million, and 80% have discomfort due to CVS. Eye strain, headache, blurring of vision and dryness are the most common symptoms. Workstation modification, voluntary blinking, adjustment of the brightness of the screen and breaks in between can reduce CVS.
Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems
DEFF Research Database (Denmark)
Taslaman, Nina Sofia
NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory: techniques for constructing and improving this type of algorithm, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented; with high probability, any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane...
Computer Hacking as a Social Problem
Alleyne, Brian
2018-01-01
This chapter introduces the ideas and practices of digital technology enthusiasts who fall under the umbrella of "hackers." We will discuss how their defining activity has been constructed as a social problem and how that construction has been challenged in different ways. The chapter concludes with several policy suggestions aimed at addressing the more problematic aspects of computer hacking.
Computer Security: better code, fewer problems
Stefan Lueders, Computer Security Team
2016-01-01
The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or out of simple ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN’s reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting “<“ or “>” in input fields even if only a number is expected. Not validating that input: you expect a birth date? So why accept letters? &...
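The two blunders named above, filtering and validating input, take only a few lines to get right. A minimal sketch; the field name, the year range and the function are invented for illustration, not CERN's actual code:

```python
import re

def validate_birth_year(raw):
    """Filter and validate a form field that should hold a 4-digit year.
    The accepted range is made up for this sketch."""
    if not re.fullmatch(r"\d{4}", raw.strip()):   # rejects letters, '<', '>', etc.
        raise ValueError("expected a 4-digit year")
    year = int(raw)
    if not 1900 <= year <= 2016:                  # illustrative plausibility check
        raise ValueError("year out of range")
    return year
```

Rejecting anything that is not a number up front is exactly the "filter, then validate" discipline the article recommends: characters like "<" never reach the rest of the application.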
AI tools in computer based problem solving
Beane, Arthur J.
1988-01-01
The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life-cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing software identical to that of large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both development and deployment of applications involving a blending of AI and traditional techniques are given.
Molecular computation: RNA solutions to chess problems.
Faulhammer, D; Cukras, A R; Lipton, R J; Landweber, L F
2000-02-15
We have expanded the field of "DNA computers" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the "Knight problem," which asks generally what configurations of knights can one place on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine "bits" or placeholders in a combinatorial RNA library. We recovered a set of "winning" molecules that describe solutions to this problem.
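The 3 x 3 instance described above is small enough to check exhaustively on a conventional computer. The following sketch (not part of the original study, which used RNA chemistry) enumerates all 2^9 bit patterns, with each bit marking whether a knight occupies one of the nine squares, and keeps those boards in which no knight attacks another:

```python
# Brute-force sketch of the 9-bit "Knight problem" instance: bit i of a
# 9-bit integer marks a knight on square i of a 3 x 3 board (row-major).

def knight_attacks(n=3):
    """Return the set of attacking square pairs on an n x n board."""
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    pairs = set()
    for r in range(n):
        for c in range(n):
            for dr, dc in moves:
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    a, b = r * n + c, rr * n + cc
                    pairs.add((min(a, b), max(a, b)))
    return pairs

def winning_boards(n=3):
    """All bit patterns in which no two knights attack each other."""
    pairs = knight_attacks(n)
    return [bits for bits in range(1 << (n * n))
            if all(not (bits >> a & 1 and bits >> b & 1) for a, b in pairs)]

print(len(winning_boards()))  # 94 winning boards (empty board included)
```

On the 3 x 3 board the centre square has no knight moves at all, and the eight outer squares form a single cycle of attacks, which is why the solution set is comfortably large.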
Greedy and metaheuristics for the offline scheduling problem in grid computing
DEFF Research Database (Denmark)
Gamst, Mette
In grid computing a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two… All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values.
International Nuclear Information System (INIS)
Eichhorn, E.; Gerber, V.; Schreyer, P.
1995-01-01
(1) Employment of those radiation hard electronics which are already known in military and space applications. (2) The experience in space-flight shall be used to investigate nuclear technology areas, for example, by using space electronics to prove the range of applications in nuclear radiating environments. (3) Reproduction of a computer developed for telecommunication satellites; proof of radiation hardness by radiation tests. (4) At 328 Krad (Si) first failure of radiation tolerant devices with 100 Krad (Si) hardness guaranteed. (5) Using radiation hard devices of the same type you can expect applications at doses of greater than 1 Mrad (Si). Electronic systems applicable for radiation categories D, C and lower part of B for manipulators, vehicles, underwater robotics. (orig.) [de
The traveling salesman problem a computational study
Applegate, David L; Chvatal, Vasek; Cook, William J
2006-01-01
This book presents the latest findings on one of the most intensely investigated subjects in computational mathematics--the traveling salesman problem. It sounds simple enough: given a set of cities and the cost of travel between each pair of them, the problem challenges you to find the cheapest route by which to visit all the cities and return home to where you began. Though seemingly modest, this exercise has inspired studies by mathematicians, chemists, and physicists. Teachers use it in the classroom. It has practical applications in genetics, telecommunications, and neuroscience.
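The problem statement above has an obvious brute-force baseline that can be written in a few lines. The four cities and unit-square coordinates below are invented for illustration; exhaustive search is hopeless beyond a dozen or so cities, which is exactly why the methods surveyed in the book exist:

```python
# Exhaustive search over all tours of a tiny, invented TSP instance.
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}

def tour_cost(order):
    """Cost of visiting cities in `order` and returning to the start."""
    legs = zip(order, order[1:] + order[:1])
    return sum(dist(cities[u], cities[v]) for u, v in legs)

def cheapest_tour():
    # Tours are cyclic, so we can fix the starting city and permute the rest.
    start, *rest = sorted(cities)
    best = min(permutations(rest), key=lambda p: tour_cost((start,) + p))
    return (start,) + best, tour_cost((start,) + best)

tour, cost = cheapest_tour()
print(tour, cost)  # ('A', 'B', 'C', 'D') 4.0 -- the perimeter of the unit square
```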
Computer simulation of solid-liquid coexistence in binary hard sphere mixtures
Kranendonk, W.G.T.; Frenkel, D.
1991-01-01
We present the results of a computer simulation study of the solid-liquid coexistence of a binary hard sphere mixture for diameter ratios in the range 0·85 ⩽ α ⩽ 1·00. For the solid phase we only consider substitutionally disordered FCC and HCP crystals. For 0·9425 < α < 1·00 we find a…
Disposal of waste computer hard disk drive: data destruction and resources recycling.
Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming
2013-06-01
An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel with the magnetic field direction; and (2) increasing the magnetic field intensity B and action time t leads to a significant improvement in the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.
Computational search for rare-earth free hard-magnetic materials
Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team
2015-03-01
It is difficult to overstate the importance of hard magnets for modern life; they enter every walk of our lives, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the known hard magnets in use today contain rare-earth elements, extraction of which is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in limited specific parts of the world. Hence it would be ideal to have a material with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), which are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the ``ExMaMa'' Project (329386).
Anatomy of safety-critical computing problems
International Nuclear Information System (INIS)
Swu Yih; Fan Chinfeng; Shirazi, Behrooz
1995-01-01
This paper analyzes the obstacles faced by current safety-critical computing applications. The major problem lies in the difficulty to provide complete and convincing safety evidence to prove that the software is safe. We explain this problem from a fundamental perspective by analyzing the essence of safety analysis against that of software developed by current practice. Our basic belief is that in order to perform a successful safety analysis, the state space structure of the analyzed system must have some properties as prerequisites. We propose the concept of safety analyzability, and derive its necessary and sufficient conditions; namely, definability, finiteness, commensurability, and tractability. We then examine software state space structures against these conditions, and affirm that the safety analyzability of safety-critical software developed by current practice is severely restricted by its state space structure and by the problem of exponential growth cost. Thus, except for small and simple systems, the safety evidence may not be complete and convincing. Our concepts and arguments successfully explain the current problematic situation faced by the safety-critical computing domain. The implications are also discussed
Computational Complexity of Some Problems on Generalized Cellular Automata
Directory of Open Access Journals (Sweden)
P. G. Klyucharev
2012-03-01
We prove that the preimage problem of a generalized cellular automaton is NP-hard. The results of this work are important for supporting the security of ciphers based on cellular automata.
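To make the preimage problem concrete, here is a sketch that inverts one step of an ordinary elementary cellular automaton by exhaustive search. Rule 90 on a ring is chosen purely as an illustrative assumption (the paper concerns generalized cellular automata); the exponential cost of this naive search is what the NP-hardness result formalizes:

```python
# Brute-force preimage search for a 1-D cellular automaton on a ring.
# Rule 90: each cell's next state is the XOR of its two neighbours.

def step_rule90(config):
    """Advance a ring configuration (tuple of 0/1) by one time step."""
    n = len(config)
    return tuple(config[(i - 1) % n] ^ config[(i + 1) % n] for i in range(n))

def preimages(target):
    """All configurations that evolve to `target` in one step (2^n search)."""
    n = len(target)
    return [config for bits in range(1 << n)
            for config in [tuple(bits >> i & 1 for i in range(n))]
            if step_rule90(config) == target]

# The all-zero pattern on a ring of 4 cells has exactly 4 preimages:
# any configuration with cell0 == cell2 and cell1 == cell3.
print(len(preimages((0, 0, 0, 0))))  # 4
```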
Computational approach to large quantum dynamical problems
International Nuclear Information System (INIS)
Friesner, R.A.; Brunet, J.P.; Wyatt, R.E.; Leforestier, C.; Binkley, S.
1987-01-01
The organizational structure is described for a new program that permits computations on a variety of quantum mechanical problems in chemical dynamics and spectroscopy. Particular attention is devoted to developing and using algorithms that exploit the capabilities of current vector supercomputers. A key component in this procedure is the recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix. An application to time-dependent laser molecule energy transfer is presented. Rate of energy deposition in the multimode molecule for systematic variations in the molecular intermode coupling parameters is emphasized
Computational methods in calculating superconducting current problems
Brown, David John, II
Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise, and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations is analyzed, and compared to the treatment of these phenomena in the published literature. Next, we examine calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
Classroom demonstration: Foucault's currents explored with a computer hard disc (HD)
Directory of Open Access Journals (Sweden)
Jorge Roberto Pimentel
2008-09-01
This paper makes an experimental exploration of Foucault's currents (eddy currents) through a rotor magnetically coupled to a computer hard disc (HD) that is no longer in use. The set-up allows electromagnetism classes in high schools to be illustrated in a stimulating way, by means of qualitative observations of the currents created as a consequence of the movement of an electric conductor in a region where a magnetic field exists.
On a hierarchy of Boolean functions hard to compute in constant depth
Directory of Open Access Journals (Sweden)
Anna Bernasconi
2001-12-01
Any attempt to find connections between mathematical properties and complexity has a strong relevance to the field of Complexity Theory. This is due to the lack of mathematical techniques to prove lower bounds for general models of computation. This work represents a step in this direction: we define a combinatorial property that makes Boolean functions “hard” to compute in constant depth, and show how harmonic analysis on the hypercube can be applied to derive new lower bounds on the size complexity of previously unclassified Boolean functions.
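As a small illustration of the harmonic-analysis machinery the abstract refers to, the sketch below computes Walsh-Fourier coefficients of a Boolean function over the hypercube by direct enumeration. The parity function is chosen here as an illustrative assumption, since its spectrum is concentrated on a single coefficient:

```python
# Fourier analysis on the hypercube: hat{f}(S) = E_x[ f(x) * chi_S(x) ],
# where f maps {0,1}^n to {-1,+1} and chi_S(x) = (-1)^(sum of x_i for i in S).
from itertools import product

def fourier_coefficient(f, S, n):
    """Fourier coefficient of f on the index set S, by direct enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        chi = (-1) ** sum(x[i] for i in S)
        total += f(x) * chi
    return total / 2 ** n

n = 4
parity = lambda x: (-1) ** sum(x)  # n-bit parity in +/-1 form

# Parity's entire Fourier weight sits on S = {0, ..., n-1}:
print(fourier_coefficient(parity, set(range(n)), n))  # 1.0
print(fourier_coefficient(parity, {0, 1}, n))         # 0.0
```

This spectral concentration on a high-degree coefficient is the kind of combinatorial property that lower-bound arguments for constant-depth circuits exploit.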
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-ups and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
Directory of Open Access Journals (Sweden)
Laxmi A. Bewoor
2017-10-01
The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between any consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. This is a typical NP-hard combinatorial optimization problem, so near optimal solutions must be found with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random key representation rule to convert the continuous position values of particles into a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
Advances in bio-inspired computing for combinatorial optimization problems
Pintea, Camelia-Mihaela
2013-01-01
'Advances in Bio-inspired Combinatorial Optimization Problems' illustrates several recent bio-inspired efficient algorithms for solving NP-hard problems. Theoretical bio-inspired concepts and models, in particular for agents, ants and virtual robots, are described. Large-scale optimization problems, for example the Generalized Traveling Salesman Problem and the Railway Traveling Salesman Problem, are solved and their results are discussed. Some of the main concepts and models described in this book are: inner rule to guide ant search - a recent model in ant optimization, heterogeneous sensitive a…
A Cognitive Model for Problem Solving in Computer Science
Parham, Jennifer R.
2009-01-01
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…
Solving satisfiability problems by the ground-state quantum computer
International Nuclear Information System (INIS)
Mao Wenjin
2005-01-01
A quantum algorithm is proposed to solve satisfiability (SAT) problems on a ground-state quantum computer. The scale of the energy gap of the ground-state quantum computer is analyzed for the 3-bit exact cover problem. The time cost of this algorithm on general SAT problems is discussed.
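For scale, the exact cover constraint mentioned above (exactly one variable in each 3-bit clause set true) can be checked classically by brute force; the tiny 4-variable instance below is invented for illustration, and is the kind of baseline a quantum algorithm's time cost would be compared against:

```python
# Brute-force solver for small "exact cover" (1-in-3 SAT) instances:
# every clause must contain exactly one true variable.
from itertools import product

def exact_cover_solutions(n_bits, clauses):
    """All assignments in which every clause has exactly one true bit."""
    return [bits for bits in product((0, 1), repeat=n_bits)
            if all(sum(bits[i] for i in clause) == 1 for clause in clauses)]

clauses = [(0, 1, 2), (1, 2, 3)]     # invented instance over 4 variables
print(exact_cover_solutions(4, clauses))
# [(0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 1)]
```

The search space doubles with every added variable, which is precisely the scaling a ground-state quantum computer aims to beat.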
Computational sieving applied to some classical number-theoretic problems
H.J.J. te Riele (Herman)
1998-01-01
Many problems in computational number theory require the application of some sieve. Efficient implementation of these sieves on modern computers has extended our knowledge of these problems considerably. This is illustrated by three classical problems: the Goldbach conjecture, factoring…
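As a minimal illustration of computational sieving, the sketch below builds a sieve of Eratosthenes and verifies the Goldbach conjecture for small even numbers. The bound of 10,000 is arbitrary; the research described pushes such verifications many orders of magnitude further:

```python
# Verify Goldbach's conjecture (every even n >= 4 is a sum of two primes)
# up to a small bound, using a sieve of Eratosthenes for primality.

def sieve(limit):
    """Return a boolean list `is_prime` indexed 0..limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return is_prime

def goldbach_holds(n, is_prime):
    """True if even n >= 4 can be written as a sum of two primes."""
    return any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1))

is_prime = sieve(10_000)
print(all(goldbach_holds(n, is_prime) for n in range(4, 10_001, 2)))  # True
```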
Recycling potential of neodymium: the case of computer hard disk drives.
Sprecher, Benjamin; Kleijn, Rene; Kramer, Gert Jan
2014-08-19
Neodymium, one of the more critically scarce rare earth metals, is often used in sustainable technologies. In this study, we investigate the potential contribution of neodymium recycling to reducing scarcity in supply, with a case study on computer hard disk drives (HDDs). We first review the literature on neodymium production and recycling potential. From this review, we find that recycling of computer HDDs is currently the most feasible pathway toward large-scale recycling of neodymium, even though HDDs do not represent the largest application of neodymium. We then use a combination of dynamic modeling and empirical experiments to conclude that within the application of NdFeB magnets for HDDs, the potential for loop-closing is significant: up to 57% in 2017. However, compared to the total NdFeB production capacity, the recovery potential from HDDs is relatively small (in the 1-3% range). The distributed nature of neodymium poses a significant challenge for recycling of neodymium.
Structure problems in the analog computation
International Nuclear Information System (INIS)
Braffort, P.L.
1957-01-01
Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) that underlie the great domains of classical analysis. Such structures in analog computation are brought into evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as the results of its computations. But the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. The structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)
A hybrid metaheuristic for the time-dependent vehicle routing problem with hard time windows
Directory of Open Access Journals (Sweden)
N. Rincon-Garcia
2017-01-01
This article presents a hybrid metaheuristic algorithm to solve the time-dependent vehicle routing problem with hard time windows. Time-dependent travel times are influenced by the different congestion levels experienced throughout the day. Vehicle scheduling without consideration of congestion might lead to underestimation of travel times and consequently missed deliveries. The algorithm presented in this paper makes use of Large Neighbourhood Search approaches and Variable Neighbourhood Search techniques to guide the search. A first stage is specifically designed to reduce the number of vehicles required, by reducing the penalties generated by time-window violations with Large Neighbourhood Search procedures. A second stage minimises the travel distance and travel time in an 'always feasible' search space. Comparison of results with available test instances shows that the proposed algorithm obtains a reduction in the number of vehicles (4.15%), travel distance (10.88%) and travel time (12.00%) compared to previous implementations, in reasonable time.
Computational physics problem solving with Python
Landau, Rubin H; Bordeianu, Cristian C
2015-01-01
The use of computation and simulation has become an essential part of the scientific process. Being able to transform a theory into an algorithm requires significant theoretical insight, detailed physical and mathematical understanding, and a working level of competency in programming. This upper-division text provides an unusually broad survey of the topics of modern computational physics from a multidisciplinary, computational science point of view. Its philosophy is rooted in learning by doing (assisted by many model programs), with new scientific materials as well as with the Python progr
Visual problems in young adults due to computer use.
Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L
2012-04-01
Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All the participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5% of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8%) had a foreign body sensation in their eyes, while 15 participants (17.2%) complained of blurred vision, which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3% of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3% of the participants use artificial tears during or after long use of computers, so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.
Marshall, Matthew M.; Carrano, Andres L.; Dannels, Wendy A.
2016-01-01
Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and…
Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm
Directory of Open Access Journals (Sweden)
Amjad Mahmood
2017-04-01
In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
Computing several eigenpairs of Hermitian problems by conjugate gradient iterations
International Nuclear Information System (INIS)
Ovtchinnikov, E.E.
2008-01-01
The paper is concerned with algorithms for computing several extreme eigenpairs of Hermitian problems based on the conjugate gradient method. We analyse computational strategies employed by various algorithms of this kind reported in the literature and identify their limitations. Our criticism is illustrated by numerical tests on a set of problems from electronic structure calculations and acoustics
Comprehension and computation in Bayesian problem solving
Directory of Open Access Journals (Sweden)
Eric D. Johnson
2015-07-01
Humans have long been characterized as poor probabilistic reasoners when presented with explicit numerical information. Bayesian word problems provide a well-known example of this, where even highly educated and cognitively skilled individuals fail to adhere to mathematical norms. It is widely agreed that natural frequencies can facilitate Bayesian reasoning relative to normalized formats (e.g. probabilities, percentages), both by clarifying logical set-subset relations and by simplifying numerical calculations. Nevertheless, between-study performance on transparent Bayesian problems varies widely, and generally remains rather unimpressive. We suggest there has been an over-focus on this representational facilitator (i.e. transparent problem structures) at the expense of the specific logical and numerical processing requirements, and the corresponding individual abilities and skills, necessary for providing Bayesian-like output given specific verbal and numerical input. We further suggest that understanding this task-individual pairing could benefit from considerations from the literature on mathematical cognition, which emphasizes text comprehension and problem solving, along with contributions of online executive working memory, metacognitive regulation, and relevant stored knowledge and skills. We conclude by offering avenues for future research aimed at identifying the stages in problem solving at which correct versus incorrect reasoners depart, and how individual differences might influence this time point.
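The natural-frequency facilitation discussed above is easy to demonstrate computationally: rephrasing a Bayesian word problem as counts out of 1000 people reduces the posterior to a simple ratio. The screening-test numbers below (1% base rate, 80% sensitivity, 9.6% false-positive rate) are the classic textbook values, used here as an assumption for illustration:

```python
# Natural-frequency computation of the probability of disease given a
# positive test: count the true and false positives in a cohort, then
# take the ratio. No Bayes' rule formula needs to be memorized.

def positive_predictive_value(population, base_rate, sensitivity,
                              false_positive_rate):
    sick = population * base_rate                          # e.g. 10 of 1000
    true_positives = sick * sensitivity                    # e.g. 8 of those 10
    false_positives = (population - sick) * false_positive_rate  # ~95 of 990
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(1000, 0.01, 0.80, 0.096)
print(round(ppv, 3))  # 0.078 -- most positive tests come from healthy people
```

The counterintuitively low answer is exactly the kind of result that even skilled reasoners miss when the same numbers are presented as probabilities.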
A taxing problem: the complementary use of hard and soft OR in public policy
Cooper, C; Brown, J; Pidd, M
2004-01-01
A review of the UK personal taxation system used a combination of hard and soft OR approaches in a complementary way. The hard OR was based on data mining to increase understanding of individual taxpayers and their changing needs within the personal tax system. The soft OR was based on soft systems methodology, with two aims in mind: first, to guide the review, and secondly, to provide an auditable approach for collecting the views of key internal and external stakeholders. The soft and hard OR were u…
Use of the Zero Fill Software Program to Help Overcome Hard Drive Problems
Eko Prasetyo Nugroho; Fivtatianti Fivtatianti, Skom, MM
2003-01-01
Zero Fill is a software tool designed specifically for Quantum-brand hard disk drives. Its function is to format the hard drive, where the format in question is a first-time format; in other words, the software formats the hard drive at a low level, commonly referred to as a low-level format. The advantage of this software is that it can fix the disk and remove all existing data within it, such as files…
Perceived problems with computer gaming and internet use among adolescents
DEFF Research Database (Denmark)
Holstein, Bjørn E; Pedersen, Trine Pagh; Bendtsen, Pernille
2014-01-01
BACKGROUND: Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer… on weekdays on computer- and console-gaming and internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. RESULTS: The three new indexes showed high face validity and acceptable internal consistency. Most… schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, with odds ratios ranging from 6.90 to 10.23. CONCLUSION: The three…
Mental health problems among the survivors in the hard-hit areas of the Yushu earthquake.
Directory of Open Access Journals (Sweden)
Zhen Zhang
BACKGROUND: On April 14, 2010, an earthquake registering 7.1 on the Richter scale shook Qinghai Province in southwest China. The earthquake caused numerous casualties and much damage. The epicenter, Yushu County, suffered the most severe damage. As part of the psychological relief work, the present study evaluated the mental health status of the people affected and identified earthquake-related risk factors for mental disorders. METHODS: Five hundred and five earthquake survivors living in Yushu County were investigated 3-4 months after the earthquake. Participant demographic data including gender, age, marital status, ethnicity, educational level, and religious beliefs were collected. The Earthquake-Specific Trauma Exposure Indicators assessed the intensity of exposure to trauma during the earthquake. The PTSD Checklist-Civilian version (PCL-C) and the Hopkins Symptoms Checklist-25 (HSCL-25) assessed the symptoms and prevalence rates of probable Posttraumatic Stress Disorder (PTSD) as well as anxiety and depression, respectively. The Perceived Social Support Scale (PSSS) evaluated subjective social support. RESULTS: The prevalence rates of probable PTSD, anxiety, and depression were 33.7%, 43.8% and 38.6%, respectively. Approximately one fifth of participants suffered from all three conditions. Individuals who were female, felt initial fear during the earthquake, and had less social support were the most likely to have poor mental health. CONCLUSIONS: The present study revealed that there are serious mental health problems among the hard-hit survivors of the Yushu earthquake. Survivors at high risk for mental disorders should be given specific consideration. The present study provides useful information for rebuilding and relief work.
Computational Nuclear Quantum Many-Body Problem: The UNEDF Project
Bogner, Scott; Bulgac, Aurel; Carlson, Joseph A.; Engel, Jonathan; Fann, George; Furnstahl, Richard J.; Gandolfi, Stefano; Hagen, Gaute; Horoi, Mihai; Johnson, Calvin W.; Kortelainen, Markus; Lusk, Ewing; Maris, Pieter; Nam, Hai Ah; Navratil, Petr
2013-01-01
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.
Computer Simulation in Problems of Thermal Strength
Directory of Open Access Journals (Sweden)
Olga I. Chelyapina
2012-05-01
Full Text Available This article discusses informative technology of using graphical programming environment LabVIEW 2009 when calculating and predicting the thermal strength of materials with an inhomogeneous structure. Algorithm for processing the experimental data was developed as part of the problem.
Problems of cranial computer-tomography
Energy Technology Data Exchange (ETDEWEB)
Seitz, D [Allgemeines Krankenhaus St. Georg, Hamburg (Germany, F.R.). Neurologische Abt.
1979-07-01
The author discusses the problems that have cropped up since the introduction of computerized tomography five years ago. To begin with, problems of contrast and object resolution are discussed, with a special view to the importance of Amipaque imaging of the cisterns, in particular for the detection of basally growing and displacing intracranial processes. After this, the tasks of computerized tomography in neurological and neurosurgical emergencies, craniocerebral injuries, cerebral circulation disturbances, inflammatory diseases of the central nervous system, epileptic seizures, and chronic headaches are reviewed. Special regard is given to the problem of repeat examinations and follow-up, especially in cerebral tumours and aresorptive hydrocephalus. Another section deals with the correlation between CT findings, clinical symptoms, and clinical findings. The importance of cranial CT for neurological diagnosis is illustrated by the change of indications for conventional methods of examination. The limits of the method are shown, and it is pointed out that cranial CT is not a search technique but requires previous examination by a neurologist, neurosurgeon, or neuropaediatrician.
Problems of cranial computer-tomography
International Nuclear Information System (INIS)
Seitz, D.
1979-01-01
The author discusses the problems that have cropped up since the introduction of computerized tomography five years ago. To begin with, problems of contrast and object resolution are discussed, with a special view to the importance of Amipaque imaging of the cisterns, in particular for the detection of basally growing and displacing intracranial processes. After this, the tasks of computerized tomography in neurological and neurosurgical emergencies, craniocerebral injuries, cerebral circulation disturbances, inflammatory diseases of the central nervous system, epileptic seizures, and chronic headaches are reviewed. Special regard is given to the problem of repeat examinations and follow-up, especially in cerebral tumours and aresorptive hydrocephalus. Another section deals with the correlation between CT findings, clinical symptoms, and clinical findings. The importance of cranial CT for neurological diagnosis is illustrated by the change of indications for conventional methods of examination. The limits of the method are shown, and it is pointed out that cranial CT is not a search technique but requires previous examination by a neurologist, neurosurgeon, or neuropaediatrician. (orig.) [de]
Solving the Stokes problem on a massively parallel computer
DEFF Research Database (Denmark)
Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya
2001-01-01
boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi‐level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work...... is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method....
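The record mentions solving the component boundary value problems by the conjugate gradient method with AMLI preconditioning; the preconditioner itself is not reproduced here, but the core iteration can be sketched. A minimal unpreconditioned CG in Python/NumPy, assuming a symmetric positive-definite matrix (illustrative only, not the paper's parallel implementation):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal unpreconditioned CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Example: a small SPD system (1-D discrete Laplacian)
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n steps for an n-by-n SPD system; preconditioning (such as AMLI) is what makes the iteration count independent of problem size.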
Standard problems for structural computer codes
International Nuclear Information System (INIS)
Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.
1985-01-01
BNL is investigating the ranges of validity of the analytical methods used to predict the behavior of nuclear safety related structures under accidental and extreme environmental loadings. During FY 85, the investigations were concentrated on special problems that can significantly influence the outcome of the soil structure interaction evaluation process. Specially, limitations and applicability of the standard interaction methods when dealing with lift-off, layering and water table effects, were investigated. This paper describes the work and the results obtained during FY 85 from the studies on lift-off, layering and water-table effects in soil-structure interaction
Computer methods in physics 250 problems with guided solutions
Landau, Rubin H
2018-01-01
Our future scientists and professionals must be conversant in computational techniques. In order to facilitate the integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It is also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and an indication of the computational and physics difficulty level of each problem.
Problems experienced by people with arthritis when using a computer.
Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester
2009-05-15
To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.
A neural algorithm for a fundamental computing problem.
Dasgupta, Sanjoy; Stevens, Charles F; Navlakha, Saket
2017-11-10
Similarity search-for example, identifying similar images in a database or similar documents on the web-is a fundamental computing problem faced by large-scale information retrieval systems. We discovered that the fruit fly olfactory circuit solves this problem with a variant of a computer science algorithm (called locality-sensitive hashing). The fly circuit assigns similar neural activity patterns to similar odors, so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly algorithm, however, uses three computational strategies that depart from traditional approaches. These strategies can be translated to improve the performance of computational similarity searches. This perspective helps illuminate the logic supporting an important sensory function and provides a conceptually new algorithm for solving a fundamental computational problem.
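The fly's variant (sparse random projections with a winner-take-all step) is not reproduced here; as a reference point, here is a minimal sketch of classic random-hyperplane locality-sensitive hashing, in which similar vectors tend to agree on more hash bits. Dimensions and perturbation size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(v, planes):
    """Sign pattern of v against random hyperplanes: similar vectors
    tend to share many bits (random-hyperplane LSH)."""
    return tuple((planes @ v) > 0)

planes = rng.standard_normal((16, 8))   # 16 hash bits for 8-dim vectors
a = rng.standard_normal(8)
b = a + 0.01 * rng.standard_normal(8)   # near-duplicate of a
c = rng.standard_normal(8)              # unrelated vector

sig_a, sig_b, sig_c = (lsh_signature(v, planes) for v in (a, b, c))
same_ab = sum(x == y for x, y in zip(sig_a, sig_b))  # high agreement
same_ac = sum(x == y for x, y in zip(sig_a, sig_c))  # ~half agreement
```

Grouping items by signature buckets then restricts a similarity query to candidates sharing many bits, avoiding a full scan of the database.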
Horeni, O.; Arentze, T.A.; Dellaert, B.G.C.; Timmermans, H.J.P.
2014-01-01
This paper introduces the online Causal Network Elicitation Technique (CNET), as a technique for measuring components of mental representations of choice tasks and compares it with the more common technique of online 'hard' laddering (HL). While CNET works in basically two phases, one in open
Petters, Dean; Hogg, David
2014-01-01
Cognitive Science is a discipline that brings together research in natural and artificial systems, and this is clearly reflected in the diverse contributions to From Animals to Robots and Back: Reflections on Hard Problems in the Study of Cognition. In tribute to Aaron Sloman and his pioneering work in Cognitive Science and Artificial Intelligence, the editors have collected a unique set of cross-disciplinary papers that include work on: · intelligent robotics; · philosophy of cognitive science; · emotion research; · computational vision; · comparative psychology; and · human-computer interaction. Key themes, such as the importance of taking an architectural view in approaching cognition, run through the text. Drawing on the expertise of leading international researchers, contemporary debates in the study of natural and artificial cognition are addressed from complementary and contrasting perspectives, with key issues being outlined at various levels of abstraction. From Animals to Robots and Back:...
Emotion Oriented Programming: Computational Abstractions for AI Problem Solving
Darty , Kevin; Sabouret , Nicolas
2012-01-01
In this paper, we present a programming paradigm for AI problem solving based on computational concepts drawn from Affective Computing. Emotions are believed to participate in human adaptability and reactivity, and in behaviour selection in complex and dynamic environments. We propose to define a mechanism inspired by this observation for general AI problem solving. To this purpose, we synthesize emotions as programming abstractions that represent the perception ...
Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa
2015-01-01
We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...
6th International Conference on Soft Computing for Problem Solving
Bansal, Jagdish; Das, Kedar; Lal, Arvind; Garg, Harish; Nagar, Atulya; Pant, Millie
2017-01-01
This two-volume book gathers the proceedings of the Sixth International Conference on Soft Computing for Problem Solving (SocProS 2016), offering a collection of research papers presented during the conference at Thapar University, Patiala, India. Providing a veritable treasure trove for scientists and researchers working in the field of soft computing, it highlights the latest developments in the broad area of “Computational Intelligence” and explores both theoretical and practical aspects using fuzzy logic, artificial neural networks, evolutionary algorithms, swarm intelligence, soft computing, computational intelligence, etc.
Singular problems in shell theory. Computing and asymptotics
Energy Technology Data Exchange (ETDEWEB)
Sanchez-Palencia, Evariste [Institut Jean Le Rond d' Alembert, Paris (France); Millet, Olivier [La Rochelle Univ. (France). LEPTIAB; Bechet, Fabien [Metz Univ. (France). LPMM
2010-07-01
It is known that deformations of thin shells exhibit peculiarities such as propagation of singularities, edge and internal layers, piecewise quasi-inextensional deformations, sensitive problems, and others, leading in most cases to numerical locking phenomena in several forms and very poor quality of computations for small relative thickness. Most of these phenomena have a local and often anisotropic character (elongated in some directions), so that efficient numerical schemes should take them into consideration. This book deals with various topics in this context: general geometric formalism, analysis of singularities, numerical computation of thin shell problems, estimates for finite element approximation (including non-uniform and anisotropic meshes), mathematical considerations on boundary value problems in connection with sensitive problems encountered for very thin shells, and others. Most of the numerical computations presented here use an adaptive anisotropic mesh procedure which allows a good computation of the physical peculiarities on the one hand, and the possibility to perform automatic computations (without a previous mathematical description of the singularities) on the other. The book is recommended for PhD students, postgraduates and researchers who want to improve their knowledge of shell theory, in particular in the areas addressed (analysis of singularities, numerical computation of thin and very thin shell problems, sensitive problems). The book need not be read continuously; the reader may refer directly to the chapters concerned. (orig.)
Computer integration in the curriculum: promises and problems
Plomp, T.; van den Akker, Jan
1988-01-01
This discussion of the integration of computers into the curriculum begins by reviewing the results of several surveys conducted in the Netherlands and the United States which provide insight into the problems encountered by schools and teachers when introducing computers in education. Case studies
International Nuclear Information System (INIS)
Dobrzanski, L.A.; Sitek, W.; Zaclona, J.
2004-01-01
The paper presents a method of modelling the properties of high-speed steels (HSS) based on chemical composition and heat treatment parameters, employing neural networks. As an example of its possible application, a computer simulation of the influence of the particular alloying elements on hardness was carried out, and the obtained results are presented. (author)
Hybrid setup for micro- and nano-computed tomography in the hard X-ray range
Fella, Christian; Balles, Andreas; Hanke, Randolf; Last, Arndt; Zabler, Simon
2017-12-01
With increasing miniaturization in industry and medical technology, non-destructive testing techniques are an area of ever-increasing importance. In this framework, X-ray microscopy offers an efficient tool for the analysis, understanding, and quality assurance of microscopic samples, in particular as it allows reconstructing three-dimensional data sets of the whole sample's volume via computed tomography (CT). The following article describes a compact X-ray microscope in the hard X-ray regime around 9 keV, based on a highly brilliant liquid-metal-jet source. In comparison to commercially available instruments, it is a hybrid that works in two different modes. The first one is a micro-CT mode without optics, which uses a high-resolution detector to allow scans of samples in the millimeter range with a resolution of 1 μm. The second mode is a microscope, which contains an X-ray optical element to magnify the sample and allows resolving 150 nm features. Changing between the modes is possible without moving the sample. Thus, the instrument represents an important step towards establishing high-resolution laboratory-based multi-mode X-ray microscopy as a standard investigation method.
Applying natural evolution for solving computational problems - Lecture 1
CERN. Geneva
2017-01-01
Darwin’s natural evolution theory has inspired computer scientists in solving computational problems. In a similar way to how humans and animals have evolved over millions of years, computational problems can be solved by evolving a population of solutions through generations until a good solution is found. In the first lecture, the fundamentals of evolutionary computing (EC) will be described, covering the different phases that the evolutionary process implies. ECJ, a framework for research in this field, will also be explained. In the second lecture, genetic programming (GP) will be covered. GP is a sub-field of EC where solutions are actual computational programs represented by trees. Bloat control and distributed evaluation will be introduced.
Applying natural evolution for solving computational problems - Lecture 2
CERN. Geneva
2017-01-01
Darwin’s natural evolution theory has inspired computer scientists in solving computational problems. In a similar way to how humans and animals have evolved over millions of years, computational problems can be solved by evolving a population of solutions through generations until a good solution is found. In the first lecture, the fundamentals of evolutionary computing (EC) will be described, covering the different phases that the evolutionary process implies. ECJ, a framework for research in this field, will also be explained. In the second lecture, genetic programming (GP) will be covered. GP is a sub-field of EC where solutions are actual computational programs represented by trees. Bloat control and distributed evaluation will be introduced.
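The evolutionary cycle these lectures describe (selection, crossover, mutation, repeated over generations) can be illustrated with a toy genetic algorithm on the OneMax problem: maximize the number of 1-bits in a bitstring. This is a generic sketch, not ECJ code; population size, rates, and generation count are arbitrary illustrative choices:

```python
import random

random.seed(1)

def evolve_onemax(n_bits=20, pop_size=30, generations=60,
                  mutation_rate=0.05):
    """Tiny generational GA maximizing the number of 1-bits (OneMax)."""
    fitness = sum  # fitness of a bitstring = count of 1-bits
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]                  # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve_onemax()
```

The same loop structure carries over to genetic programming, where individuals are program trees and crossover/mutation operate on subtrees instead of bit positions.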
Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei
2013-10-01
The minimum spanning tree (MST) problem is to find a minimum-weight connected subset of edges containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, rarely for problems with multi-lateral path solutions, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we reasonably design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and get the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms.
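On conventional computers the MST problem is solved in polynomial time by classical greedy algorithms. For reference, a minimal sketch of Kruskal's algorithm with union-find (the paper's DNA procedure is not reproduced here; the example graph is an arbitrary illustration):

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal's algorithm with union-find.
    `edges` is a list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):                      # find root, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# 4-vertex example graph as (weight, u, v) triples
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree, weight = kruskal_mst(4, edges)
```

Sorting dominates the cost, giving O(m log m) for m edges; the union-find structure keeps cycle detection nearly constant-time per edge.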
Computational nuclear quantum many-body problem: The UNEDF project
Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.
2013-10-01
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.
New computational methodology for large 3D neutron transport problems
International Nuclear Information System (INIS)
Dahmani, M.; Roy, R.; Koclas, J.
2004-01-01
We present a new computational methodology, based on the 3D method of characteristics, dedicated to solving very large 3D problems without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we set up a new computing scheme that requires more CPU resources than the usual one, which is based on sweeps over large tracking files. The huge storage capacity needed in some problems, and the related I/O queries needed by the characteristics solver, are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (authors)
Digital dice computational solutions to practical probability problems
Nahin, Paul J
2013-01-01
Some probability problems are so difficult that they stump the smartest mathematicians. But even the hardest of these problems can often be solved with a computer and a Monte Carlo simulation, in which a random-number generator simulates a physical process, such as a million rolls of a pair of dice. This is what Digital Dice is all about: how to get numerical answers to difficult probability problems without having to solve complicated mathematical equations. Popular-math writer Paul Nahin challenges readers to solve twenty-one difficult but fun problems, from determining the
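The book's approach can be illustrated with the simplest possible case: estimating the probability that two fair dice sum to 7, whose exact value is 6/36 = 1/6. A minimal Monte Carlo sketch; the trial count and seed are arbitrary illustrative choices:

```python
import random

random.seed(42)

def estimate_seven(trials=100_000):
    """Monte Carlo estimate of P(sum of two fair dice == 7)."""
    hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
               for _ in range(trials))
    return hits / trials

p = estimate_seven()
```

With 100,000 trials the standard error is about sqrt(p(1-p)/n) ≈ 0.0012, so the estimate lands very close to 1/6; the same pattern scales to problems with no tractable closed-form answer.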
Computational Biomechanics: Theoretical Background and Biological/Biomedical Problems
Tanaka, Masao; Nakamura, Masanori
2012-01-01
Rapid developments have taken place in biological/biomedical measurement and imaging technologies as well as in computer analysis and information technologies. The increase in data obtained with such technologies invites the reader into a virtual world that represents realistic biological tissue or organ structures in digital form and allows for simulation and what is called “in silico medicine.” This volume is the third in a textbook series and covers both the basics of continuum mechanics of biosolids and biofluids and the theoretical core of computational methods for continuum mechanics analyses. Several biomechanics problems are provided for better understanding of computational modeling and analysis. Topics include the mechanics of solid and fluid bodies, fundamental characteristics of biosolids and biofluids, computational methods in biomechanics analysis/simulation, practical problems in orthopedic biomechanics, dental biomechanics, ophthalmic biomechanics, cardiovascular biomechanics, hemodynamics...
On Wigner's problem, computability theory, and the definition of life
International Nuclear Information System (INIS)
Swain, J.
1998-01-01
In 1961, Eugene Wigner presented a clever argument that in a world which is adequately described by quantum mechanics, self-reproducing systems in general, and perhaps life in particular, would be incredibly improbable. The problem and some attempts at its solution are examined, and a new solution is presented based on computability theory. In particular, it is shown that computability theory provides limits on what can be known about a system in addition to those which arise from quantum mechanics. (author)
Numerical problems with the Pascal triangle in moment computation
Czech Academy of Sciences Publication Activity Database
Kautsky, J.; Flusser, Jan
2016-01-01
Roč. 306, č. 1 (2016), s. 53-68 ISSN 0377-0427 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : moment computation * Pascal triangle * appropriate polynomial basis * numerical problems Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0459096.pdf
One-dimensional computational modeling on nuclear reactor problems
International Nuclear Information System (INIS)
Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos
2013-01-01
In this article, we present a computational model which gives a dynamic view of some applications of Nuclear Engineering, specifically the power distribution and effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, with the linearized Boltzmann equation in the discrete ordinates (SN) formulation, independent of time, with isotropic scattering, and we built software (a simulator) for modelling the computational problems used in typical calculations. The program used in the implementation of the simulator was Matlab, version 7.0. (author)
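The discrete-ordinates formulation mentioned in this record can be illustrated with a generic textbook sketch: source iteration for a 1-D slab in S2 (two directions), with diamond-difference spatial discretization and vacuum boundaries. This is not the article's Matlab simulator; all cross sections and mesh parameters are illustrative assumptions:

```python
import numpy as np

def sn_slab(n_cells=50, width=5.0, sig_t=1.0, sig_s=0.5, q=1.0,
            mus=(-0.5773503, 0.5773503), weights=(1.0, 1.0),
            tol=1e-8, max_iter=500):
    """Source iteration for a fixed-source 1-D slab, S2 quadrature,
    diamond-difference scheme, vacuum boundary conditions."""
    dx = width / n_cells
    phi = np.zeros(n_cells)                   # scalar flux
    for _ in range(max_iter):
        src = 0.5 * (sig_s * phi + q)         # isotropic scattering + source
        phi_new = np.zeros(n_cells)
        for mu, w in zip(mus, weights):
            psi_edge = 0.0                    # vacuum: no incoming flux
            cells = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            for i in cells:                   # transport sweep
                a = abs(mu) / dx
                psi_c = (src[i] + 2 * a * psi_edge) / (sig_t + 2 * a)
                psi_edge = 2 * psi_c - psi_edge   # diamond-difference closure
                phi_new[i] += w * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:
            phi = phi_new
            break
        phi = phi_new
    return phi

flux = sn_slab()
```

As a sanity check, the flux peaks at mid-slab and stays below the infinite-medium value q/(sig_t - sig_s) = 2, since the vacuum boundaries leak neutrons.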
Molecular computing towards a novel computing architecture for complex problem solving
Chang, Weng-Long
2014-01-01
This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main differences between molecular circuits and traditional digital circuits in manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation, molecular operations, and number representation in molecular computing, and provides many molecular algorithms: to construct the parity generator and the parity checker of error-detection codes in digital communication, to encode integers of different formats, single precision and double precision floating-point numbers, to implement addition and subtraction of unsigned integers, to construct logic operations...
The actual problems of the standardization of magnetically hard materials and permanent magnets
International Nuclear Information System (INIS)
Kurbatov, P.A.; Podolskiy, I.D.
1998-01-01
The standardization of industrial products improves their conformity with their functional purpose, contributes to technological development, and helps eliminate technical barriers to trade. The progress of world trade necessitates the certification of permanent magnets and their manufacturing methods. According to ISO/IEC recommendations, certification standards should contain clear requirements on the operational parameters of products that can be objectively controlled. The testing procedures should be clearly formulated and ensure that the results can be reproduced. This calls for the creation of a system of interconnected certification standards: a standard for the technical characteristics of prospective commercial magnetically hard materials, standard specifications for permanent magnets, standards for typical testing procedures, and standards for the metrological assurance of measurements. (orig.)
On the problem of creation of structural materials on the basis of hard alloys
International Nuclear Information System (INIS)
Kajbyshev, O.A.; Merzhanov, A.G.; Zaripov, N.G.; Bloshenko, V.N.; Bokij, V.A.; Efimov, O.Yu.
1992-01-01
The chemical composition and structure of refractory skeletons produced by self-propagating high-temperature synthesis (SHS) and by powder metallurgy were studied for their effects on the high-temperature mechanical properties of hard alloys based on these skeletons. Porous skeletons were obtained on the basis of TiC0.55, TiC0.65, TiC0.75, TiC0.85 and TiC1.0 carbides, with subsequent impregnation with the heat-resistant nickel-base alloy ZhS6U. It was shown that a sintered skeleton was prone to fracture, while an SHS skeleton preserved its structure. The optimal operating temperature of the materials considered was noted to depend on the temperatures of the brittle-ductile transition and of the transition into the superplastic state of the refractory phase.
[Problem list in computer-based patient records].
Ludwig, C A
1997-01-14
Computer-based clinical information systems are capable of effectively processing even large amounts of patient-related data. However, physicians depend on rapid access to summarized, clearly laid out data on the computer screen to inform themselves about a patient's current clinical situation. In introducing a clinical workplace system, we therefore transformed the problem list, which for decades has been successfully used in clinical information management, into an electronic equivalent and integrated it into the medical record. The table contains a concise overview of diagnoses and problems as well as related findings. Graphical information can also be integrated into the table, and an additional space is provided for a summary of planned examinations or interventions. The digital form of the problem list makes it possible to use the entire list or selected text elements for generating medical documents. Diagnostic terms for medical reports are transferred automatically to corresponding documents. Computer technology has an immense potential for the further development of problem list concepts. With multimedia applications, sound and images will be included in the problem list. Through hyperlinks, the problem list could become a central information board and table of contents of the medical record, thus serving as the starting point for database searches and supporting the user in navigating through the medical record.
A Heuristic Design Information Sharing Framework for Hard Discrete Optimization Problems
National Research Council Canada - National Science Library
Jacobson, Sheldon H
2007-01-01
.... This framework has been used to gain new insights into neighborhood structure designs that allow different neighborhood functions to share information when using the same heuristic applied to the same problem...
A Hard Road: Driving Local Action against Alcohol Related Problems in a Rural Town
Directory of Open Access Journals (Sweden)
Julaine Allan
2012-12-01
Full Text Available Context is important in developing strategies to address alcohol-related violence. Knowledge of local conditions is critical to action in rural areas. The aim of this study was to gather information about context-specific alcohol-related problems experienced by frontline workers in a regional centre to inform the local alcohol action plan. Frontline workers were invited to participate in one of five focus group discussions that investigated problems experienced as a result of other people’s alcohol use. Alcohol-related problems were more frequently associated with particular time periods than with any single group in the community. Social media was used to incite arguments between groups in different venues during the lock-out periods. The focus groups identified the location of licensed premises and a taxi rank, and previous relationships between protagonists, as the key contextual factors causing alcohol-related problems. A second taxi rank was identified as a useful local management strategy. Supply reduction was suggested as a key factor in long-term solutions to alcohol-related problems in rural towns. The local liquor accord did not want to reduce the supply of alcohol by closing late-night venues earlier. Local action to reduce alcohol-related problems will be limited to pragmatic solutions because supply reduction is unacceptable to those in the business of selling alcohol.
Computer Use and Vision‑Related Problems Among University ...
African Journals Online (AJOL)
and adjusted OR was calculated using the multiple logistic regression. Results: The ... Nearly 72% of students reported frequent interruption of computer work. Headache ... procedure (non-probability sampling) recruiting 250 .... Table 1: Percentage distribution of visual problems among different genders and ethnic groups.
Computer Use and Behavior Problems in Twice-Exceptional Students
Alloway, Tracy Packiam; Elsworth, Miquela; Miley, Neal; Seckinger, Sean
2016-01-01
This pilot study investigated how engagement with computer games and TV exposure may affect behaviors of gifted students. We also compared behavioral and cognitive profiles of twice-exceptional students and children with Attention Deficit/Hyperactivity Disorder (ADHD). Gifted students were divided into those with behavioral problems and those…
Solving a Hamiltonian Path Problem with a bacterial computer
Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T
2009-01-01
Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP-complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three-node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three-node directed graph. This proof
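The search the bacterial computer performs can be illustrated in silico. Below is a minimal brute-force enumeration of Hamiltonian paths in a hypothetical three-node directed graph (the graph and node labels are invented for illustration, not the construct from the paper):

```python
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    """Return all orderings of the nodes that traverse existing directed edges only."""
    paths = []
    for perm in permutations(nodes):
        # a permutation is a Hamiltonian path iff every consecutive pair is an edge
        if all((a, b) in edges for a, b in zip(perm, perm[1:])):
            paths.append(perm)
    return paths

# A hypothetical three-node directed graph: 1 -> 2, 2 -> 3, 1 -> 3
nodes = [1, 2, 3]
edges = {(1, 2), (2, 3), (1, 3)}
print(hamiltonian_paths(nodes, edges))  # [(1, 2, 3)]
```

Each bacterium in the paper's construct effectively samples one permutation at random; the exhaustive loop above is the classical analogue of that population-level exploration.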
Solving a Hamiltonian Path Problem with a bacterial computer
Directory of Open Access Journals (Sweden)
Treece Jessica
2009-07-01
Second International Conference on Soft Computing for Problem Solving
Nagar, Atulya; Deep, Kusum; Pant, Millie; Bansal, Jagdish; Ray, Kanad; Gupta, Umesh
2014-01-01
The present book is based on the research papers presented in the International Conference on Soft Computing for Problem Solving (SocProS 2012), held at JK Lakshmipat University, Jaipur, India. This book provides the latest developments in the area of soft computing and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, data mining, etc. The objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed to the researchers and scientists engaged in various fields of Science and Technology.
Third International Conference on Soft Computing for Problem Solving
Deep, Kusum; Nagar, Atulya; Bansal, Jagdish
2014-01-01
The present book is based on the research papers presented in the 3rd International Conference on Soft Computing for Problem Solving (SocProS 2013), held as a part of the golden jubilee celebrations of the Saharanpur Campus of IIT Roorkee, at the Noida Campus of Indian Institute of Technology Roorkee, India. This book is divided into two volumes and covers a variety of topics including mathematical modelling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining etc. Particular emphasis is laid on soft computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems, which are otherwise difficult to solve by the usual and traditional methods. The book is directed ...
arXiv Spin models in complex magnetic fields: a hard sign problem
de Forcrand, Philippe
2018-01-01
Coupling spin models to complex external fields can give rise to interesting phenomena like zeroes of the partition function (Lee-Yang zeroes, edge singularities) or oscillating propagators. Unfortunately, it usually also leads to a severe sign problem that can be overcome only in special cases; if the partition function has zeroes, the sign problem is even representation-independent at these points. In this study, we couple the N-state Potts model in different ways to a complex external magnetic field and discuss the above mentioned phenomena and their relations based on analytic calculations (1D) and results obtained using a modified cluster algorithm (general D) that in many cases either cures or at least drastically reduces the sign-problem induced by the complex external field.
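The connection between complex fields, Lee-Yang zeroes, and the sign problem can be illustrated on the simplest case. This is a minimal transfer-matrix sketch for the 1D Ising chain in a complex field (chosen for illustration; the paper works with the N-state Potts model and a modified cluster algorithm): for a purely imaginary field the partition function is real but oscillates in sign, vanishing at the Lee-Yang zeroes.

```python
import numpy as np

def ising_Z(N, beta, h):
    """Partition function of the periodic 1D Ising chain via the transfer matrix.
    h may be complex, in which case the Boltzmann weights are no longer positive."""
    T = np.array([[np.exp(beta + h), np.exp(-beta)],
                  [np.exp(-beta),    np.exp(beta - h)]], dtype=complex)
    return np.trace(np.linalg.matrix_power(T, N))

# Real field: Z is real and positive, no sign problem.
print(ising_Z(8, 0.5, 0.3).imag)  # ~0
# Purely imaginary field: Z stays real but changes sign as the field varies;
# its zeroes are the Lee-Yang zeroes.
print(ising_Z(8, 0.5, 0.4j).real)
```

Scanning the imaginary field strength shows Z passing through zero, which is exactly where any reweighting-based simulation breaks down.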
Marshall, Matthew M; Carrano, Andres L; Dannels, Wendy A
2016-10-01
Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and best practices of STEM instruction to give first-year DHH students enrolled in a postsecondary STEM program the opportunity to develop problem-solving skills in real-world scenarios. Using an industrial engineering laboratory that provides manufacturing and warehousing environments, students were immersed in real-world scenarios in which they worked on teams to address prescribed problems encountered during the activities. The highly structured, Plan-Do-Check-Act approach commonly used in industry was adapted for the DHH student participants to document and communicate the problem-solving steps. Students who experienced the intervention realized a 14.6% improvement in problem-solving proficiency compared with a control group, and this gain was retained at 6 and 12 months, post-intervention.
Comment on "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit".
Carrillo-Bernal, M A; Núñez-Yépez, H N; Salas-Brito, A L; Solis, Didier A
2015-02-01
In the referred paper, the authors use a numerical method for solving ordinary differential equations and a softened Coulomb potential −1/√(x²+β²) to study the one-dimensional Coulomb problem, letting the parameter β approach zero. We note that even though their numerical findings in the soft-potential scenario are correct, their conclusions do not extend to the one-dimensional Coulomb problem (β=0). Their claims regarding the possible existence of an even ground state with energy −∞ with a Dirac-δ eigenfunction, and of well-defined parity eigenfunctions in the one-dimensional hydrogen atom, are questioned.
1991-06-01
Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh.
A new approach for solution of vehicle routing problem with hard ...
Indian Academy of Sciences (India)
SERAP ERCAN CжMERT
2017-11-27
... to deal with a large-size real problem and to solve it in a short time using the exact method. Finally ... programming model was used for VRPTW with pickup and ...
Internet computer coaches for introductory physics problem solving
Xu Ryan, Qing
The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem-solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation will address the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments will be explained. The data collected from the large-scale experimental studies will be discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam and problem-solving rubric. It will also address the implications drawn from this study, including using this data to direct future coach design and difficulties in conducting authentic assessment of problem solving.
Approximability of optimization problems through adiabatic quantum computation
Cruz-Santos, William
2014-01-01
The adiabatic quantum computation (AQC) is based on the adiabatic theorem to approximate solutions of the Schrödinger equation. The design of an AQC algorithm involves the construction of a Hamiltonian that describes the behavior of the quantum system. This Hamiltonian is expressed as a linear interpolation of an initial Hamiltonian whose ground state is easy to compute, and a final Hamiltonian whose ground state corresponds to the solution of a given combinatorial optimization problem. The adiabatic theorem asserts that if the time evolution of a quantum system described by a Hamiltonian is l
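The linear interpolation described above can be sketched numerically for a toy problem. In this minimal example (the cost values and problem size are invented for illustration), the final Hamiltonian is diagonal in the computational basis and its ground state encodes the optimum:

```python
import numpy as np

# Initial Hamiltonian: transverse field on 2 qubits; its ground state (the uniform
# superposition) is easy to prepare.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
H0 = -(np.kron(X, I) + np.kron(I, X))

# Final Hamiltonian: diagonal cost function over bit strings 00, 01, 10, 11.
costs = np.array([3.0, 1.0, 2.0, 0.0])  # the optimum is bit string 11 (index 3)
H1 = np.diag(costs)

def ground_state(H):
    """Lowest eigenvalue and corresponding eigenvector of a Hermitian matrix."""
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]

# Follow the instantaneous ground state along H(s) = (1-s)*H0 + s*H1.
for s in (0.0, 0.5, 1.0):
    E, psi = ground_state((1 - s) * H0 + s * H1)
    print(f"s={s}: ground energy {E:.3f}, dominant basis state {np.argmax(np.abs(psi))}")
```

At s=1 the ground state is the basis state of minimum cost, so reading it out solves the combinatorial problem; the adiabatic theorem governs how slowly s must be swept for a physical system to stay in this state.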
4th International Conference on Soft Computing for Problem Solving
Deep, Kusum; Pant, Millie; Bansal, Jagdish; Nagar, Atulya
2015-01-01
This two volume book is based on the research papers presented at the 4th International Conference on Soft Computing for Problem Solving (SocProS 2014) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and healthcare, data mining, etc. Mainly the emphasis is on Soft Computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology and is directed to the researchers and scientists engaged in various real-world applications of ‘Soft Computing’.
Problems and limitation in diagnosis with computed tomography
International Nuclear Information System (INIS)
Ohtomo, Eiichi
1985-01-01
The development and explosion of computed tomography (CT) machines have forced a revolution in the diagnosis of nervous diseases, making it very easy to detect cerebrovascular disorders, cerebral atrophy, and ventricular dilation. However, as much information on various brain diseases has been available by CT, diagnostic problems and limitations of CT have becoming evident. This paper outlines CT problems and limitations in diagnosing cerebrovascular disorders, cerebral tumors, inflammation, head trauma, and cerebral atrophy; and discusses their relation to adults and elderly people. (Namekawa, K.)
Wang, Wenlong; Mandrà, Salvatore; Katzgraber, Helmut
We propose a patch planting heuristic that allows us to create arbitrarily-large Ising spin-glass instances on any topology and with any type of disorder, and where the exact ground-state energy of the problem is known by construction. By breaking up the problem into patches that can be treated either with exact or heuristic solvers, we can reconstruct the optimum of the original, considerably larger, problem. The scaling of the computational complexity of these instances with various patch numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and quantum annealing on the D-Wave 2X quantum annealer. The method can be useful for benchmarking of novel computing technologies and algorithms.
Computer codes for problems of isotope and radiation research
International Nuclear Information System (INIS)
Remer, M.
1986-12-01
A survey is given of computer codes for problems in isotope and radiation research. Altogether 44 codes are described as titles with abstracts. 17 of them are in the INIS scope and are processed individually. The subjects are indicated in the chapter headings: 1) analysis of tracer experiments, 2) spectrum calculations, 3) calculations of ion and electron trajectories, 4) evaluation of gamma irradiation plants, and 5) general software
THE CLOUD COMPUTING INTRODUCTION IN EDUCATION: PROBLEMS AND PERSPECTIVES
Directory of Open Access Journals (Sweden)
Y. Dyulicheva
2013-03-01
Full Text Available The problems and perspectives of cloud computing usage in education are investigated in this paper. Examples of the most popular cloud platforms used in education, such as Google Apps Education Edition and Microsoft Live@edu, are considered. A schema of interaction between teachers and students in the cloud is proposed. The capabilities of cloud storage services such as Microsoft SkyDrive and Apple iCloud are considered.
THE CLOUD COMPUTING INTRODUCTION IN EDUCATION: PROBLEMS AND PERSPECTIVES
Y. Dyulicheva
2013-01-01
Approximation and hardness results for the maximum edge q-coloring problem
DEFF Research Database (Denmark)
Adamaszek, Anna Maria; Popa, Alexandru
2016-01-01
We consider the problem of coloring edges of a graph subject to the following constraints: for every vertex v, all the edges incident with v have to be colored with at most q colors. The goal is to find a coloring satisfying the above constraints and using the maximum number of colors. Notice...... ϵ>0 and any q≥2 assuming the unique games conjecture (UGC), or 1+−ϵ for any ϵ>0 and any q≥3 (≈1.19 for q=2) assuming P≠NP. These results hold even when the considered graphs are bipartite. On the algorithmic side, we restrict to the case q=2, since this is the most important in practice and we show...... a 5/3-approximation algorithm for graphs which have a perfect matching....
Why is risk communication hardly applied in Japan? Psychological problem of scientific experts
International Nuclear Information System (INIS)
Kosugi, Motoko; Tsuchiya, Tomoko; Taniguchi, Taketoshi
2000-01-01
The purpose of this paper is to discuss the problems that impede communication about technological risks with the public in Japan, focusing especially on the views of experts as suppliers of risk information. In this study, we clarified through questionnaire surveys that there are significant differences in risk perception and in the information environment concerning science and technology between the public and scientific experts, as many previous studies have shown. Most importantly, experts perceive the difference in risk perception between the public and experts as larger than the public does. We conclude that this cognition among experts impedes taking the first step toward communicating with the public about technological risks. (author)
Burrows, J.; Johnson, V.; Henckel, D.
2016-01-01
Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B
Measuring systems of hard to get objects: problems with analysis of measurement results
Gilewska, Grazyna
2005-02-01
The problem of limited access to the metrological parameters of objects arises in many measurements, especially for biological objects, whose parameters are often determined by indirect measurement. A random component predominates in the measurement results when access to the measured object is very limited. Every measuring process is subject to conditions that limit how it can be conducted (e.g., increasing the number of repeated measurements to decrease the random limiting error). These may be temporal or financial limitations or, in the case of biological objects, a small sample volume, the influence of the measuring tool and observer on the object, or fatigue effects, e.g., in a patient. Taking these difficulties into account, the author developed and verified the practical application of methods for rejecting outlying observations and, subsequently, innovative methods for eliminating measured data with excess variance, in order to decrease the standard deviation of the mean with a limited amount of data at an accepted level of confidence. The elaborated methods were verified on measurements of knee-joint space width obtained from radiographs. Measurements were carried out indirectly on the digital images of radiographs. The results confirmed the validity of the elaborated methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected effects.
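The outlier-rejection step described above can be illustrated with simple iterative sigma clipping (a generic rule chosen for illustration; the author's actual rejection criteria are not specified in this abstract):

```python
import statistics

def sigma_clip(data, k=2.0):
    """Iteratively discard points more than k standard deviations from the mean."""
    data = list(data)
    while len(data) > 2:
        m = statistics.mean(data)
        s = statistics.stdev(data)
        kept = [x for x in data if abs(x - m) <= k * s]
        if len(kept) == len(data):  # stable: nothing left to reject
            break
        data = kept
    return data

# Repeated width measurements with one gross outlier (invented numbers).
measurements = [5.1, 5.0, 5.2, 5.1, 9.8, 5.0, 5.1]
print(sigma_clip(measurements))  # [5.1, 5.0, 5.2, 5.1, 5.0, 5.1]
```

Removing the outlier sharply reduces the sample standard deviation, and hence the standard deviation of the mean, which is the effect the abstract describes for data sets too small to improve by extra repetitions.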
Directory of Open Access Journals (Sweden)
Anja Fischer
2015-06-01
Full Text Available One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM models and permuted variable length Markov (PVLM models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP, the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP. Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.
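The quadratic TSP referred to above differs from the ordinary TSP in that costs are attached to consecutive node triples rather than edges. A brute-force sketch for tiny instances (the cost function is an invented toy; the exact algorithms studied in the paper are far more sophisticated):

```python
from itertools import permutations

def qtsp_bruteforce(n, triple_cost):
    """Minimize the total cost over consecutive node triples of a cyclic tour.
    triple_cost(a, b, c) is the cost of visiting a, b, c consecutively."""
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):  # fix node 0 to break rotational symmetry
        tour = (0,) + perm
        cost = sum(triple_cost(tour[i], tour[(i + 1) % n], tour[(i + 2) % n])
                   for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Toy cost: prefer triples whose middle node is the average of its neighbours.
cost = lambda a, b, c: abs((a + c) / 2 - b)
print(qtsp_bruteforce(5, cost))
```

The factorial blow-up of this enumeration is exactly why the branch-and-cut style exact solvers benchmarked in the paper are needed for splice-site instances of 150+ base pairs.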
Gent, Tiejo van
2012-01-01
The aim of this thesis is to expand knowledge of mental health problems among deaf and severely hard-of-hearing children and adolescents in the following domains: 1. the prevalence of mental health problems; 2. specific intra- and interpersonal aspects of pathogenesis; 3. characteristics of the
Evolving hard problems: Generating human genetics datasets with a complex etiology
Directory of Open Access Journals (Sweden)
Himmelstein Daniel S
2011-07-01
Full Text Available Abstract Background A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Results Here we develop and evaluate a model free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight-hundred Pareto fronts; one for each independent run of our algorithm. In each run the predictiveness of single genetic variation and pairs of genetic variants have been minimized, while the predictiveness of third, fourth, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive four or five order interactions and minimized lower-level effects. Conclusions This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. 
These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.
Radiation interlocks - The choice between conventional hard-wired logic and computer-based systems
International Nuclear Information System (INIS)
Crook, K.F.
1987-01-01
During the past few years, the use of computers in radiation safety systems has become more widespread. This is not surprising given the ubiquitous nature of computers in the modern technological world. But is a computer a good choice for the central logic element of a personnel safety system? Recent accidents at computer controlled medical accelerators would indicate that extreme care must be exercised if malfunctions are to be avoided. The Department of Energy (DOE) has recently established a sub-committee to formulate recommendations on the use of computers in safety systems for accelerators. This paper reviews the status of the committee's recommendations, and describes radiation protection interlock systems as applied to both accelerators and to irradiation facilities. Comparisons are made between the conventional (relay) approach and designs using computers
Radiation interlocks: The choice between conventional hard-wired logic and computer-based systems
International Nuclear Information System (INIS)
Crook, K.F.
1986-11-01
During the past few years, the use of computers in radiation safety systems has become more widespread. This is not surprising given the ubiquitous nature of computers in the modern technological world. But is a computer a good choice for the central logic element of a personnel safety system? Recent accidents at computer-controlled medical accelerators would indicate that extreme care must be exercised if malfunctions are to be avoided. The Department of Energy has recently established a sub-committee to formulate recommendations on the use of computers in safety systems for accelerators. This paper will review the status of the committee's recommendations, and describe radiation protection interlock systems as applied to both accelerators and to irradiation facilities. Comparisons are made between the conventional relay approach and designs using computers. 6 refs., 6 figs
FLASHRAD: A 3D Rad Hard Memory Module For High Performance Space Computers, Phase I
National Aeronautics and Space Administration — The computing capabilities of onboard spacecraft are a major limiting factor for accomplishing many classes of future missions. Although technology development...
A Parallel Computational Model for Multichannel Phase Unwrapping Problem
Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo
2015-05-01
In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus; this permits capturing its topological structure in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.
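For readers unfamiliar with the underlying problem, one-dimensional phase unwrapping, a much simpler relative of the multichannel problem addressed by EMCF, can be sketched as follows (an illustrative aside, not the paper's algorithm):

```python
import numpy as np

def unwrap_1d(phase):
    """Undo modulo-2*pi wrapping by keeping successive differences in (-pi, pi]."""
    out = np.array(phase, dtype=float)
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        # shift this sample by the multiple of 2*pi that minimizes the jump
        out[i] -= 2 * np.pi * np.round(d / (2 * np.pi))
    return out

true_phase = np.linspace(0, 12, 50)          # smooth ramp exceeding 2*pi
wrapped = np.angle(np.exp(1j * true_phase))  # wrapped into (-pi, pi]
recovered = unwrap_1d(wrapped)
print(np.allclose(recovered, true_phase))    # True
```

This works only when the true phase changes by less than pi between samples; in 2-D interferograms that assumption fails along noisy fringes, which is what forces the minimum-cost-flow formulations (and their parallelization) studied in the paper.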
Crowd Computing as a Cooperation Problem: An Evolutionary Approach
Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel
2013-05-01
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
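The master-worker reinforcement dynamics described above can be caricatured in a few lines. This toy simulation (all parameters and the update rule are invented for illustration, not the paper's model) shows workers' propensity to report truthful results drifting upward when audits and fines are in place:

```python
import random

def simulate(workers=5, rounds=2000, audit_p=0.3, reward=1.0, fine=2.0, lr=0.1, seed=1):
    """Each worker keeps a propensity to compute truthfully and updates it by
    reinforcement: positive payoffs reinforce the chosen action, negative payoffs
    push toward the opposite action. The master audits each round with prob audit_p."""
    random.seed(seed)
    p_truth = [0.5] * workers
    for _ in range(rounds):
        audited = random.random() < audit_p
        for i in range(workers):
            truthful = random.random() < p_truth[i]
            if truthful:
                payoff = reward - 0.1          # small cost of actually computing
            else:
                payoff = reward if not audited else -fine
            target = 1.0 if truthful else 0.0
            if payoff > 0:
                p_truth[i] += lr * (target - p_truth[i])
            else:
                p_truth[i] += lr * ((1.0 - target) - p_truth[i])
    return sum(p_truth) / workers

print(round(simulate(), 2))
```

With these (arbitrary) values the expected drift of the truth propensity is positive everywhere, so the population converges toward reliable reporting, which is the qualitative behavior the paper establishes rigorously via its Markov chain analysis.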
The limits of quantum computers
International Nuclear Information System (INIS)
Aaronson, S.
2008-01-01
Future computers that work with quantum bits would indeed solve certain special problems extremely fast, but for most problems they would hardly be superior to contemporary computers. This insight could manifest a new fundamental physical principle
Continuous-Variable Quantum Computation of Oracle Decision Problems
Adcock, Mark R. A.
Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying
Computer utilization for the solution of gas supply problems
Energy Technology Data Exchange (ETDEWEB)
Raleigh, J T; Brady, J R
1968-01-01
The computer programs in this paper have proven to be useful tools in the solution of gas supply problems. Some of the management type of applications are: (1) long range planning projects; (2) comparison of various proposed gas purchase contracts; (3) to assist with budget and operational planning; (4) to assist in making cost-of-service and rate predictions; (5) to investigate the feasibility of processing plants at any point on the system; and (6) to assist dispatching in its daily operation for cost and quality control. Competition, not only from the gas industry, but also from other forms of energy, makes it imperative that quantitative and economic information with regard to that marketable resource be available under a variety of assumptions and alternatives. This information can best be made available in a timely manner by the use of the computer.
5th International Conference on Soft Computing for Problem Solving
Deep, Kusum; Bansal, Jagdish; Nagar, Atulya; Das, Kedar
2016-01-01
This two-volume book is based on the research papers presented at the 5th International Conference on Soft Computing for Problem Solving (SocProS 2015) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining, etc. The emphasis is mainly on Soft Computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology, and it is directed at the researchers and scientists engaged in various real-world applications of 'Soft Computing'.
Application of computational fluid mechanics to atmospheric pollution problems
Hung, R. J.; Liaw, G. S.; Smith, R. E.
1986-01-01
One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.
Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills
Polyak, Stephen T.; von Davier, Alina A.; Peterschmidt, Kurt
2017-01-01
This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of users' sub-skill performance scores based on their patterns of selected dialog responses. PMID: 29238314
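The continuous Bayesian evidence tracing described in this abstract can be illustrated with a minimal sketch: a sub-skill mastery probability is updated with Bayes' rule each time a dialog response (a polytomous item option) is observed. The conditional probability table and response options below are invented for illustration, not taken from the ACT simulation:

```python
def bayes_update(p_mastery, response, cpt):
    """Update P(mastery) given one observed response option.

    cpt[response] = (P(response | mastery), P(response | no mastery))
    """
    p_given_m, p_given_not_m = cpt[response]
    numerator = p_given_m * p_mastery
    evidence = numerator + p_given_not_m * (1.0 - p_mastery)
    return numerator / evidence

# Hypothetical table: the "collaborative" option is more likely from a master.
CPT = {
    "collaborative": (0.7, 0.2),
    "neutral":       (0.2, 0.4),
    "off_task":      (0.1, 0.4),
}

p = 0.5  # start with limited knowledge of skill mastery
for obs in ["collaborative", "collaborative", "neutral"]:
    p = bayes_update(p, obs, CPT)
print(f"P(mastery) = {p:.3f}")
```

Each observation shifts the posterior toward or away from mastery; in a real system the table entries would be estimated from calibration data rather than invented.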
Remotely Telling Humans and Computers Apart: An Unsolved Problem
Hernandez-Castro, Carlos Javier; Ribagorda, Arturo
The ability to tell humans and computers apart is imperative to protect many services from misuse and abuse. For this purpose, tests called CAPTCHAs or HIPs have been designed and put into production. Recent history shows that most (if not all) can be broken given enough time and commercial interest: CAPTCHA design seems to be a much more difficult problem than previously thought. The assumption that difficult-AI problems can be easily converted into valid CAPTCHAs is misleading. There are also some extrinsic problems that do not help, especially the large number of in-house designs that are put into production without any prior public critique. In this paper we present a state-of-the-art survey of current HIPs, including proposals that are now in production. We classify them regarding their basic design ideas. We discuss current attacks as well as future attack paths, and we also present common errors in design, and how many implementation flaws can transform a not necessarily bad idea into a weak CAPTCHA. We present examples of these flaws, using specific well-known CAPTCHAs. In a more theoretical way, we discuss the threat model: confronted risks and countermeasures. Finally, we introduce and discuss some desirable properties that new HIPs should have, concluding with some proposals for future work, including methodologies for design, implementation and security assessment.
van Loo, D.; Speijer, R.; Masschaele, B.; Dierick, M.; Cnudde, V.; Boone, M.; de Witte, Y.; Dewanckele, J.; van Hoorebeke, L.; Jacobs, P.
2009-04-01
Foraminifera (forams) are single-celled amoeba-like marine organisms which build a tiny calcareous multi-chambered shell for protection. Their enormous abundance, great variation of shape through time and their presence in all marine deposits made these tiny microfossils the oil companies' best friend by facilitating the detection of new oil wells. Besides the success of forams in the oil and gas industry, they are also a most powerful tool for reconstructing past climate change. The shell of a foraminifer is a tiny gold mine of information, both geometrical and chemical. However, until recently the best information on this architecture was obtained through imaging the outside of a shell with Scanning Electron Microscopy (SEM), giving no clues towards internal structures other than single snapshots obtained by breaking a specimen apart. With X-ray computed tomography (CT) it is possible to overcome this problem and uncover a huge amount of geometrical information without destroying the samples. Using the latest generation of micro-CTs, called nano-CTs because of their sub-micron resolution, it is now possible to perform adequate imaging even on these tiny samples without needing huge facilities. In this research, a comparison is made between different X-ray sources and X-ray detectors and the resulting image resolution. Sharpness, noise and contrast are all important parameters that affect the accuracy of the results and the speed of data processing. Combining this tomography technique with specific image processing software for segmentation, it is possible to obtain a 3D virtual representation of the entire foram shell. This 3D virtual object can then be used for many purposes, of which automatic measurement of chamber size is one of the most important. The segmentation process is a combination of several algorithms that are often used in CT evaluation; in this work an evaluation of those algorithms is…
Nonpher: computational method for design of hard-to-synthesize structures
Czech Academy of Sciences Publication Activity Database
Voršilák, M.; Svozil, Daniel
2017-01-01
Vol. 9, March (2017), Article No. 20. ISSN 1758-2946. R&D Projects: GA MŠk LO1220; GA MŠk LM2015063. Institutional support: RVO:68378050. Keywords: synthetic feasibility; molecular complexity; molecular morphing. Subject RIV: EB - Genetics; Molecular Biology. OECD field: Computer sciences, information science, bioinformatics. Impact factor: 4.220, year: 2016
Fluid history computation methods for reactor safeguards problems using MNODE computer program
International Nuclear Information System (INIS)
Huang, Y.S.; Savery, C.W.
1976-10-01
A method for predicting the pressure-temperature histories of air, liquid water, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamic problem, semiscale blowdown experiments, full-scale Marviken test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical consideration is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975.
Rock, Nicholas M. S.
This review covers rock, mineral and isotope geochemistry, mineralogy, igneous and metamorphic petrology, and volcanology. Crystallography, exploration geochemistry, and mineral exploration are excluded. Fairly extended comments on software availability, and on computerization of the publication process and of specimen collection indexes, may interest a wider audience. A proliferation of both published and commercial software in the past 3 years indicates increasing interest in what traditionally has been a rather reluctant sphere of geoscience computer activity. However, much of this software duplicates the same old functions (Harker and triangular plots, mineral recalculations, etc.). It usually is more efficient nowadays to use someone else's program, or to employ the command language in one of many general-purpose spreadsheet or statistical packages available, than to program a specialist operation from scratch in, say, FORTRAN. Greatest activity has been in mineralogy, where several journals specifically encourage publication of computer-related activities, and IMA and MSA Working Groups on microcomputers have been convened. In petrology and geochemistry, large national databases of rock and mineral analyses continue to multiply, whereas the international database IGBA grows slowly; some form of integration is necessary to make these disparate systems of lasting value to the global "hard-rock" community. Total merging or separate addressing via an intelligent "front-end" are both possibilities. In volcanology, the BBC's videodisk Volcanoes and the Smithsonian Institution's Global Volcanism Project use the most up-to-date computer technology in an exciting and innovative way, to promote public education.
Computer codes for the analysis of flask impact problems
International Nuclear Information System (INIS)
Neilson, A.J.
1984-09-01
This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all the aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes which are already mounted on UKAEA computers will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)
Hard electronics
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
Hard material technologies were surveyed to establish the hard electronic technology which offers superior characteristics under hard operational or environmental conditions as compared with conventional Si devices. The following technologies were separately surveyed: (1) the device and integration technologies of wide-gap hard semiconductors such as SiC, diamond and nitride, (2) the technology of hard semiconductor devices for vacuum microelectronics technology, and (3) the technology of hard new material devices for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in development of an atomic layer growth method and mist deposition method. This leading research is expected to solve such issues difficult to be easily realized by current Si technology as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed and densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.
Numerical and analytical solutions for problems relevant for quantum computers
International Nuclear Information System (INIS)
Spoerl, Andreas
2008-01-01
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high-quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
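The quantum Fourier transform mentioned in this abstract has a simple matrix definition that can be checked numerically: on N = 2^n basis states, F[j][k] = ω^(jk)/√N with ω = e^(2πi/N), and F is unitary. A minimal stdlib-only sketch (illustration only, not code from the thesis):

```python
import cmath

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits qubits."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    norm = 1.0 / N ** 0.5
    return [[norm * omega ** (j * k) for k in range(N)] for j in range(N)]

def is_unitary(m, tol=1e-9):
    """Check that m times its conjugate transpose is the identity."""
    N = len(m)
    for i in range(N):
        for j in range(N):
            s = sum(m[i][k] * m[j][k].conjugate() for k in range(N))
            expected = 1.0 if i == j else 0.0
            if abs(s - expected) > tol:
                return False
    return True

F = qft_matrix(2)
print(is_unitary(F))  # True
```

Actual implementations decompose F into O(n²) Hadamard and controlled-phase gates rather than applying the dense matrix, which is what makes the QFT efficient on a quantum device.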
Engineering and Computing Portal to Solve Environmental Problems
Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.
2018-01-01
This paper describes the architecture and services of the Engineering and Computing Portal, which is considered to be a complex solution that provides access to high-performance computing resources, enables users to carry out computational experiments, teaches parallel programming technologies, and solves computational tasks, including those related to technogenic safety.
Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story
Gunbas, N.
2015-01-01
The purpose of this study was to investigate the effect of a computer-based story, which was designed in anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as computer story, and then compared with the paper-based version of the same story…
Habib, Komal; Parajuly, Keshav; Wenzel, Henrik
2015-10-20
Recovery of resources, in particular, metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, where the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system. It is illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that out of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can be finally recovered from the metallurgic process. They further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a more preferred option over shredding. However, it remains a technological and logistic challenge for the existing system.
Computer Use and Vision-Related Problems Among University ...
African Journals Online (AJOL)
Related Problems Among University Students In Ajman, United Arab Emirates. ... of 500 Students studying in Gulf Medical University, Ajman and Ajman University of ... prevalence of vision related problems was noted among university students.
Internet Computer Coaches for Introductory Physics Problem Solving
Xu Ryan, Qing
2013-01-01
The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving skills throughout the…
A computational intelligence approach to the Mars Precision Landing problem
Birge, Brian Kent, III
Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by gravity turn to hover and then thrust vector to desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over…
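Particle Swarm Optimization, named in the abstract above, can be sketched in a few lines. This is the standard textbook PSO on a toy 2-D objective, with conventional coefficient values; it is not the dissertation's specific variant or its Mars entry models:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=1):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-seen position
    gbest = min(pbest, key=f)[:]         # swarm's best-seen position
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
```

For a landing problem, `f` would instead score a simulated trajectory (e.g. miss distance from the target footprint), which is where most of the real work lies.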
Experience of computed tomographic myelography and discography in cervical problem
Energy Technology Data Exchange (ETDEWEB)
Nakatani, Shigeru; Yamamoto, Masayuki; Uratsuji, Masaaki; Suzuki, Kunio; Matsui, Eigo [Hyogo Prefectural Awaji Hospital, Sumoto, Hyogo (Japan); Kurihara, Akira
1983-06-01
CTM (computed tomographic myelography) was performed on 15 cases of cervical lesions, and on 5 of them, CTD (computed tomographic discography) was also performed. CTM revealed the intervertebral state and, in combination with CTD, provided more accurate information. The combined method of CTM and CTD was useful for soft disc herniation.
[The current state of the brain-computer interface problem].
Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A
2015-01-01
It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed revolutionary changes in the lives of individuals and the entire society. Computer technologies are tightly connected with every field, either directly or indirectly. We can currently claim that computers are far superior to the human mind in terms of a number of parameters; however, machines lack the key feature: they are incapable of independent thinking (like a human). However, the key to successful development of humankind is collaboration between the brain and the computer rather than competition. Such collaboration, in which a computer broadens, supplements, or replaces some brain functions, is known as the brain-computer interface. Our review focuses on real-life implementation of this collaboration.
Solving Dynamic Battlespace Movement Problems Using Dynamic Distributed Computer Networks
National Research Council Canada - National Science Library
Bradford, Robert
2000-01-01
.... The thesis designs a system using this architecture that invokes operations research network optimization algorithms to solve problems involving movement of people and equipment over dynamic road networks...
D'Onofrio, David J; An, Gary
2010-01-21
The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an
Polyhedral Computations for the Simple Graph Partitioning Problem
DEFF Research Database (Denmark)
Sørensen, Michael Malmros
The simple graph partitioning problem is to partition an edge-weighted graph into mutually disjoint subgraphs, each containing no more than b nodes, such that the sum of the weights of all edges in the subgraphs is maximal. In this paper we present a branch-and-cut algorithm for the problem that ...
Models for the discrete berth allocation problem: A computational comparison
DEFF Research Database (Denmark)
Buhrkal, Katja Frederik; Zuglian, Sara; Røpke, Stefan
2011-01-01
In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe three main models of the discrete dynamic berth allocation...
Assessment of computer-related health problems among post-graduate nursing students.
Khan, Shaheen Akhtar; Sharma, Veena
2013-01-01
The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self-Instructional Module for the prevention of such problems at a selected university in Delhi. A descriptive survey with a correlational design was adopted. A total of 97 subjects were selected from different faculties of Jamia Hamdard by multi-stage sampling with a systematic random sampling technique. The majority of post-graduate students showed average compliance with computer-related ergonomics principles and reported moderate computer-related health problems. The Self-Instructional Module developed for the prevention of computer-related health problems was found to be acceptable by the post-graduate students.
A Computer Simulation for Teaching Diagnosis of Secondary Ignition Problems
Diedrick, Walter; Thomas, Rex
1977-01-01
Presents the methodology and findings of an experimental project to determine the viability of computer assisted as opposed to more traditional methods of instruction for teaching one phase of automotive troubleshooting. (Editor)
Computational chemical product design problems under property uncertainties
DEFF Research Database (Denmark)
Frutiger, Jerome; Cignitti, Stefano; Abildskov, Jens
2017-01-01
Three different strategies of how to combine computational chemical product design with Monte Carlo based methods for uncertainty analysis of chemical properties are outlined. One method consists of a computer-aided molecular design (CAMD) solution and a post-processing property uncertainty...... fluid design. While the higher end of the uncertainty range of the process model output is similar for the best performing fluids, the lower end of the uncertainty range differs largely....
A Hybrid Soft Computing Approach for Subset Problems
Directory of Open Access Journals (Sweden)
Broderick Crawford
2013-01-01
Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partitions) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving this kind of problem. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. In constraint programming, on the other hand, the solving process of constraint satisfaction problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistencies either prior to or during search. Our hybrid approach was tested with set covering and set partitioning benchmark datasets, and it was observed that the performance of ACO improved when this filtering technique was embedded in its constructive phase.
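The constructive phase described above can be sketched with a plain greedy baseline. The following Python toy (hypothetical data, no pheromone model) shows only the covering mechanics and a simple candidate-pruning filter, not the authors' ACO + arc-consistency hybrid:

```python
# Greedy construction for set covering: repeatedly pick the subset that
# covers the most still-uncovered items. A plain baseline, not the
# ACO + arc-consistency hybrid described in the abstract.

def greedy_set_cover(universe, subsets):
    """universe: set of items; subsets: dict name -> set of items."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Filter step: discard subsets that can no longer contribute.
        candidates = {n: s & uncovered for n, s in subsets.items() if s & uncovered}
        if not candidates:
            raise ValueError("infeasible instance: some items cannot be covered")
        best = max(candidates, key=lambda n: len(candidates[n]))
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

cover = greedy_set_cover(
    {1, 2, 3, 4, 5},
    {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4}, "D": {4, 5}},
)
print(cover)  # ['A', 'D']
```

A real ACO solver would bias each choice by pheromone trails and run many such constructions in parallel; the arc-consistency filter of the paper plays a role similar to the candidate pruning shown here.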
Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C
2012-09-01
Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.
Energy Technology Data Exchange (ETDEWEB)
Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)
2015-08-15
The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.
DEFF Research Database (Denmark)
2002-01-01
The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...
Solving wood chip transport problems with computer simulation.
Dennis P. Bradley; Sharon A. Winsauer
1976-01-01
Efficient chip transport operations are difficult to achieve due to frequent and often unpredictable changes in distance to market, chipping rate, time spent at the mill, and equipment costs. This paper describes a computer simulation model that allows a logger to design an efficient transport system in response to these changing factors.
Computable majorants of the limit load in Hencky's plasticity problems
Czech Academy of Sciences Publication Activity Database
Repin, S.; Sysala, Stanislav; Haslinger, Jaroslav
2018-01-01
Roč. 75, č. 1 (2018), s. 199-217 ISSN 0898-1221 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : computable bounds * divergence free fields * Hencky's plasticity * limit load * penalization Subject RIV: BA - General Mathematics Impact factor: 1.531, year: 2016 http://www.sciencedirect.com/science/article/pii/S0898122117305552
Computational approach to Thornley's problem by bivariate operational calculus
Bazhlekova, E.; Dimovski, I.
2012-10-01
Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.
Solving project scheduling problems by minimum cut computations
Möhring, R.H.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen
In project scheduling, a set of precedence-constrained jobs has to be scheduled so as to minimize a given objective. In resource-constrained project scheduling, the jobs additionally compete for scarce resources. Due to its universality, the latter problem has a variety of applications in
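The connection to minimum cuts can be illustrated with a generic max-flow routine: by max-flow/min-cut duality, the value returned below equals the minimum cut capacity. This is a minimal Edmonds-Karp sketch on a made-up toy network, not the authors' scheduling reduction:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; by max-flow/min-cut duality the returned value
    equals the capacity of a minimum s-t cut. capacity: dict[u] -> dict[v] -> int."""
    # Build residual capacities, adding reverse edges with capacity 0.
    res = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals the min cut
        # Collect the path edges, then augment by the bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Toy network; the minimum s-t cut (the two edges into t) has capacity 5.
cap = {"s": {"a": 3, "b": 4}, "a": {"t": 2}, "b": {"a": 1, "t": 3}, "t": {}}
cut_value = max_flow(cap, "s", "t")
print(cut_value)  # 5
```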
Solving the Curriculum Sequencing Problem with DNA Computing Approach
Debbah, Amina; Ben Ali, Yamina Mohamed
2014-01-01
In the e-learning systems, a learning path is known as a sequence of learning materials linked to each others to help learners achieving their learning goals. As it is impossible to have the same learning path that suits different learners, the Curriculum Sequencing problem (CS) consists of the generation of a personalized learning path for each…
Structure problems in the analog computation; Problemes de structure dans le calcul analogique
Energy Technology Data Exchange (ETDEWEB)
Braffort, P L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1957-07-01
Recent mathematical developments have shown the importance of elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are put in evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as results of its computations, but the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. This structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)
International Nuclear Information System (INIS)
Dodds, H.L. Jr.
1977-01-01
An overview of the recent accomplishments of the Computational Benchmark Problems Committee of the American Nuclear Society Mathematics and Computation Division is presented. Solutions of computational benchmark problems in the following eight areas are presented and discussed: (a) high-temperature gas-cooled reactor neutronics, (b) pressurized water reactor (PWR) thermal hydraulics, (c) PWR neutronics, (d) neutron transport in a cylindrical ''black'' rod, (e) neutron transport in a boiling water reactor (BWR) rod bundle, (f) BWR transient neutronics with thermal feedback, (g) neutron depletion in a heavy water reactor, and (h) heavy water reactor transient neutronics. It is concluded that these problems and solutions are of considerable value to the nuclear industry because they have been and will continue to be useful in the development, evaluation, and verification of computer codes and numerical-solution methods
Social balance as a satisfiability problem of computer science.
Radicchi, Filippo; Vilone, Daniele; Yoon, Sooeyon; Meyer-Ortmanns, Hildegard
2007-02-01
Reduction of frustration was the driving force in an approach to social balance as it was recently considered by Antal [T. Antal, P. L. Krapivsky, and S. Redner, Phys. Rev. E 72, 036121 (2005)]. We generalize their triad dynamics to k-cycle dynamics for arbitrary integer k. We derive the phase structure, determine the stationary solutions, and calculate the time it takes to reach a frozen state. The main difference in the phase structure as a function of k is related to k being even or odd. As a second generalization we dilute the all-to-all coupling as considered by Antal to a random network with connection probability w < 1. Moreover, the social balance problem can be mapped onto a satisfiability problem of computer science. The phase of social balance in our original interpretation then becomes the phase of satisfaction of all logical clauses in the satisfiability problem. In common to the cases we study, the ideal solution without any frustration always exists, but the question actually is whether this solution can be found by means of a local stochastic algorithm within a finite time. The answer depends on the choice of parameters. After establishing the mapping between the two classes of models, we generalize the social-balance problem to the diluted network topology for which the satisfiability problem is usually studied. In connection with the satisfiability problem, on the other hand, we generalize the random local algorithm to a p-random local algorithm, including a parameter p that corresponds to the propensity parameter in the social balance problem. The qualitative effect of the inclusion of this parameter is a bias towards the optimal solution and a reduction of the needed simulation time.
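The local stochastic algorithm mentioned above can be sketched for the triad (k = 3) case. This simplified variant always flips a negative link of a randomly chosen frustrated triad, which biases the walk toward the all-positive state and guarantees termination; the paper's p-random local algorithm is more general:

```python
import itertools, random

def frustrated_triads(sign):
    """Return the triads whose product of link signs is negative (frustrated)."""
    nodes = sorted({u for edge in sign for u in edge})
    return [t for t in itertools.combinations(nodes, 3)
            if sign[(t[0], t[1])] * sign[(t[0], t[2])] * sign[(t[1], t[2])] < 0]

def local_dynamics(sign, seed=1):
    """Local stochastic sketch: while some triad is frustrated, pick one at
    random and flip one of its negative links. Since each flip removes a
    negative link, the dynamics must reach a balanced (frozen) state."""
    rng = random.Random(seed)
    sign = dict(sign)
    while True:
        bad = frustrated_triads(sign)
        if not bad:
            return sign  # balanced: every triad has an even number of negative links
        i, j, k = rng.choice(bad)
        # A frustrated triad always has 1 or 3 negative links to choose from.
        negatives = [e for e in [(i, j), (i, k), (j, k)] if sign[e] < 0]
        sign[rng.choice(negatives)] = 1

# Complete signed graph on 5 nodes with random +/-1 links.
rng = random.Random(0)
g = {e: rng.choice([-1, 1]) for e in itertools.combinations(range(5), 2)}
final = local_dynamics(g)
print(len(frustrated_triads(final)))  # 0: a frozen, balanced state
```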
NEWBOX: A computer program for parameter estimation in diffusion problems
International Nuclear Information System (INIS)
Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.
1989-01-01
In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions
International Nuclear Information System (INIS)
Blochwitz, M.; Kretzschmar, F.; Rattke, R.
1985-01-01
Non-destructive determination of material characteristics such as nilductility transition temperature is of high importance in component monitoring during long-term operation. An attempt has been made to obtain characteristics correlating with mechanico-technological material characteristics by both acoustic resonance through magnetization (ARDM) and acoustic emission analysis in Vickers hardness tests. Taking into account the excitation mechanism of acoustic emission generation, which has a quasistationary stochastic character in a.c. magnetization and a transient nature in hardness testing, a microcomputerized device has been constructed for frequency analysis of the body sound level in ARDM evaluation and for measuring the pulse sum and/or pulse rate during indentation of the test specimen in hardness evaluation. Prerequisite for evaluating the measured values is the knowledge of the frequency dependence of the sensors and the instrument system. The results obtained are presented. (author)
An algorithm to compute a rule for division problems with multiple references
Directory of Open Access Journals (Sweden)
Sánchez Sánchez, Francisca J.
2012-01-01
In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.
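For background, a classic single-reference rule for division problems with claims, the constrained equal awards (CEA) rule, can be computed by bisection. This illustrates the problem family only; it is not the multiple-references algorithm of Hinojosa et al.:

```python
def constrained_equal_awards(estate, claims, tol=1e-9):
    """Classic CEA rule: each claimant receives min(claim, r), with the common
    award level r chosen so that the awards sum to the estate. Background
    illustration only, not the paper's multiple-references algorithm."""
    assert 0 <= estate <= sum(claims)
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:  # bisection on the common award level r
        r = (lo + hi) / 2
        if sum(min(c, r) for c in claims) < estate:
            lo = r
        else:
            hi = r
    return [min(c, (lo + hi) / 2) for c in claims]

awards = constrained_equal_awards(100, [30, 60, 90])
print([round(a, 6) for a in awards])  # approximately [30, 35, 35]
```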
Standard problems to evaluate soil structure interaction computer codes
International Nuclear Information System (INIS)
Miller, C.A.; Costantino, C.J.; Philippacopoulos, A.J.
1979-01-01
The seismic response of nuclear power plant structures is often calculated using lumped parameter methods. A finite element model of the structure is coupled to the soil with a spring-dashpot system used to represent the interaction process. The parameters of the interaction model are based on analytic solutions to simple problems which are idealizations of the actual problems of interest. The objective of the work reported in this paper is to compare predicted responses using the standard lumped parameter models with experimental data. These comparisons are shown to be good for a fairly uniform soil system and for loadings which do not result in nonlinear interaction effects such as liftoff. 7 references, 7 figures
To the problem of reliability standardization in computer-aided manufacturing at NPP units
International Nuclear Information System (INIS)
Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.
1989-01-01
The problems of reliability standardization in computer-aided manufacturing of NPP units are analyzed under two approaches: computer-aided manufacturing of NPP units as part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the composition of reliability indices for computer-aided manufacturing of NPP units under each of the considered approaches is substantiated
Particular application of methods of AdaBoost and LBP to the problems of computer vision
Волошин, Микола Володимирович
2012-01-01
The application of the AdaBoost method and the local binary pattern (LBP) method to different spheres of computer vision, such as personality identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for implementations of computer vision, and of computer iridology in particular. The article also considers the problem of colour spaces, which are used as a filter and as a pre-processing step for images. Method of AdaB...
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression based on cardinality constraint, group sparsity and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy from diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. Avoiding weighing features, we propose to directly solve the group cardinality constraint logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum to the smooth and convex logistic regression problem is determined via gradient descent while we derive a closed form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients that received reconstructive surgery of Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significant higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints. Copyright © 2016 Elsevier B.V. All rights reserved.
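The hard group-cardinality constraint can be sketched with projected gradient descent: after each gradient step, only the k groups of weights with the largest norm are kept and the rest are zeroed. This toy (synthetic data, hypothetical group layout) differs from the paper's Penalty Decomposition method:

```python
import math, random

def group_sparse_logreg(X, y, groups, k, steps=500, lr=0.5):
    """Projected-gradient sketch of group-cardinality-constrained logistic
    regression: after each gradient step, keep the k groups of weights with
    the largest L2 norm and zero out the rest. Illustrates the hard
    constraint only; not the paper's Penalty Decomposition method."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        # Gradient of the mean logistic loss for labels y in {-1, +1}.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            coef = -yi / (1.0 + math.exp(margin))
            for j, xj in enumerate(xi):
                grad[j] += coef * xj / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
        # Hard projection onto the group cardinality constraint.
        norms = {g: math.sqrt(sum(w[j] ** 2 for j in idx))
                 for g, idx in groups.items()}
        keep = set().union(*(set(groups[g]) for g in
                             sorted(norms, key=norms.get, reverse=True)[:k]))
        w = [wj if j in keep else 0.0 for j, wj in enumerate(w)]
    return w

rng = random.Random(0)
X = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(60)]
y = [1 if xi[0] > 0 else -1 for xi in X]  # only feature 0 is informative
groups = {0: [0, 1], 1: [2, 3]}
w = group_sparse_logreg(X, y, groups, k=1)
print(w)  # the discarded group's weights are exactly zero
```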
Solving overvoltage protection problems by means of an analogue computer
Energy Technology Data Exchange (ETDEWEB)
Stephanides, N
1964-03-21
The importance of improving overvoltage protection and reducing insulation levels for voltages of 525 and 765 kV is fully realized. A digital computer may be used to determine, according to the Bergeron procedure, the voltage distribution at different points of a given network, but this procedure is very time-consuming. An analogue simulation is described which, by giving an instantaneous display of the overvoltage wave on the screen of a cathode-ray oscillograph, is better suited to the overvoltage protection study and also satisfies the conditions related to wave reproducibility. The method of simulating inductors, capacitors, and lightning arrestors (by using transistors) is shown, and special emphasis is put on the surge generator analogue, for which thyratron tubes are used in order to get a linear front increase of the impulse testing wave. The results obtained are accurate within 1 to 2% as compared with calculated values. Ten figures and seven references are given.
Data science in R a case studies approach to computational reasoning and problem solving
Nolan, Deborah
2015-01-01
Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar
USING CLOUD COMPUTING IN SOLVING THE PROBLEMS OF LOGIC
Directory of Open Access Journals (Sweden)
Pavlo V. Mykytenko
2017-02-01
The article provides an overview of the most popular cloud services, in particular those which include complete office suites, describes their basic functional characteristics, and highlights the advantages and disadvantages of cloud services in the educational process. A comparative analysis was made of the spreadsheets included in the office suites of such cloud services as Zoho Office Suite, Microsoft Office 365 and Google Docs. On the basis of the research findings, the most suitable cloud services for use in the educational process are suggested. The possibility of using spreadsheets in the study of logic is considered, from creating formulas that implement logical operations to the creation of means of automating the problem-solving process.
New results on classical problems in computational geometry in the plane
DEFF Research Database (Denmark)
Abrahamsen, Mikkel
In this thesis, we revisit three classical problems in computational geometry in the plane. An obstacle that often occurs as a subproblem in more complicated problems is to compute the common tangents of two disjoint, simple polygons. For instance, the common tangents turn up in problems related...... to visibility, collision avoidance, shortest paths, etc. We provide a remarkably simple algorithm to compute all (at most four) common tangents of two disjoint simple polygons. Given each polygon as a read-only array of its corners in cyclic order, the algorithm runs in linear time and constant workspace...... and is the first to achieve the two complexity bounds simultaneously. The set of common tangents provides basic information about the convex hulls of the polygons—whether they are nested, overlapping, or disjoint—and our algorithm thus also decides this relationship. One of the best-known problems in computational...
Sampling from a polytope and hard-disk Monte Carlo
International Nuclear Information System (INIS)
Kapfer, Sebastian C; Krauth, Werner
2013-01-01
The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
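The local Markov-chain Monte Carlo moves discussed above can be sketched directly: displace one disk at a time and reject any move that creates an overlap. The configuration below is a made-up toy, not one from the paper:

```python
import math, random

def periodic_dist(p, q, L):
    """Minimum-image distance in an L x L periodic box."""
    dx = abs(p[0] - q[0]); dx = min(dx, L - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, L - dy)
    return math.hypot(dx, dy)

def local_mc(disks, sigma, L, steps, delta=0.1, seed=0):
    """Metropolis-style local moves for hard disks of diameter sigma: a move
    is accepted iff it produces no overlap (all pair distances >= sigma)."""
    rng = random.Random(seed)
    disks = [list(p) for p in disks]
    for _ in range(steps):
        i = rng.randrange(len(disks))
        new = [(disks[i][0] + rng.uniform(-delta, delta)) % L,
               (disks[i][1] + rng.uniform(-delta, delta)) % L]
        if all(periodic_dist(new, disks[j], L) >= sigma
               for j in range(len(disks)) if j != i):
            disks[i] = new  # accept; otherwise keep the old position
    return disks

# Four disks of diameter 0.5 on a loose square lattice in a 4 x 4 box.
start = [(1.0, 1.0), (1.0, 3.0), (3.0, 1.0), (3.0, 3.0)]
final = local_mc(start, sigma=0.5, L=4.0, steps=2000)
```

Since the starting configuration is overlap-free and every accepted move preserves that invariant, the final configuration is always a valid hard-disk packing.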
De-Deus, Gustavo; Marins, Juliana; Neves, Aline de Almeida; Reis, Claudia; Fidel, Sandra; Versiani, Marco A; Alves, Haimon; Lopes, Ricardo Tadeu; Paciornik, Sidnei
2014-02-01
The accumulation of debris occurs after root canal preparation procedures specifically in fins, isthmus, irregularities, and ramifications. The aim of this study was to present a step-by-step description of a new method used to longitudinally identify, measure, and 3-dimensionally map the accumulation of hard-tissue debris inside the root canal after biomechanical preparation using free software for image processing and analysis. Three mandibular molars presenting the mesial root with a large isthmus width and a type II Vertucci's canal configuration were selected and scanned. The specimens were assigned to 1 of 3 experimental approaches: (1) 5.25% sodium hypochlorite + 17% EDTA, (2) bidistilled water, and (3) no irrigation. After root canal preparation, high-resolution scans of the teeth were accomplished, and free software packages were used to register and quantify the amount of accumulated hard-tissue debris in either canal space or isthmus areas. Canal preparation without irrigation resulted in 34.6% of its volume filled with hard-tissue debris, whereas the use of bidistilled water or NaOCl followed by EDTA showed a reduction in the percentage volume of debris to 16% and 11.3%, respectively. The closer the distance to the isthmus area was the larger the amount of accumulated debris regardless of the irrigating protocol used. Through the present method, it was possible to calculate the volume of hard-tissue debris in the isthmuses and in the root canal space. Free-software packages used for image reconstruction, registering, and analysis have shown to be promising for end-user application. Copyright © 2014. Published by Elsevier Inc.
Banks, H T; Holm, Kathleen; Robbins, Danielle
2010-11-01
We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
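The bootstrapping approach can be sketched for the simplest case, a one-parameter linear model with residual resampling; the model and noise level below are hypothetical:

```python
import random, statistics

def fit_slope(xs, ys):
    """Least-squares slope through the origin: a = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_se(xs, ys, n_boot=500, seed=0):
    """Residual bootstrap sketch: refit after resampling residuals with
    replacement, and report the spread of the refitted estimates."""
    rng = random.Random(seed)
    a_hat = fit_slope(xs, ys)
    resid = [y - a_hat * x for x, y in zip(xs, ys)]
    boots = []
    for _ in range(n_boot):
        ys_star = [a_hat * x + rng.choice(resid) for x in xs]
        boots.append(fit_slope(xs, ys_star))
    return a_hat, statistics.stdev(boots)

# Synthetic data with true slope 2.0 and small Gaussian noise.
rng = random.Random(1)
xs = [i / 10 for i in range(1, 31)]
ys = [2.0 * x + rng.gauss(0, 0.1) for x in xs]
a_hat, se = bootstrap_se(xs, ys)
print(a_hat, se)  # slope near 2.0 with a small standard error
```

The asymptotic-theory alternative compared in the abstract would instead read the standard error off the estimated information matrix; the bootstrap trades that derivation for computation.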
Ergul, Ozgur
2014-01-01
The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: Presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examplesCovers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objectsDiscusses applications including scattering from airborne targets, scattering from red
TRUMP3-JR: a finite difference computer program for nonlinear heat conduction problems
International Nuclear Information System (INIS)
Ikushima, Takeshi
1984-02-01
Computer program TRUMP3-JR is a revised version of TRUMP3, a finite difference computer program used for the solution of multi-dimensional nonlinear heat conduction problems. Pre- and post-processings for input data generation and graphical representations of calculation results of TRUMP3 are available in TRUMP3-JR. The calculation equations, program descriptions and user's instructions are presented. A sample problem is described to demonstrate the use of the program. (author)
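TRUMP3-JR itself is a production code; as a minimal illustration of the finite-difference treatment of nonlinear heat conduction, the sketch below (hypothetical parameters, not from the program) relaxes a 1-D rod with temperature-dependent conductivity k(T) = k0(1 + beta*T) to steady state by explicit time stepping:

```python
# Explicit finite-difference solution of 1-D nonlinear heat conduction with
# temperature-dependent conductivity; all parameter values are hypothetical.
n, L = 21, 1.0
dx = L / (n - 1)
dt = 1e-4                      # satisfies the explicit stability limit here
k0, beta, rho_c = 1.0, 0.5, 1.0
T = [0.0] * n
T[0], T[-1] = 1.0, 0.0         # fixed boundary temperatures

def k(u):
    return k0 * (1.0 + beta * u)

for _ in range(20000):         # march to t = 2.0, well past the diffusion time
    Tn = T[:]
    for i in range(1, n - 1):
        ke = 0.5 * (k(T[i]) + k(T[i + 1]))   # east-face conductivity (average)
        kw = 0.5 * (k(T[i - 1]) + k(T[i]))   # west-face conductivity
        Tn[i] = T[i] + dt / (rho_c * dx * dx) * (
            ke * (T[i + 1] - T[i]) - kw * (T[i] - T[i - 1]))
    T = Tn
```

At steady state the flux is constant, so k0*(T + beta*T^2/2) varies linearly in x; for these parameters the midpoint temperature is about 0.5495, which the numerical solution should reproduce.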
Xie, Zongtang; Xu, Jiuping; Wu, Zhibin
2017-02-01
Earthquake exposure has often been associated with psychological distress. However, little is known about the cumulative effect of exposure to two earthquakes on psychological distress and in particular, the effect on the development of post-traumatic stress disorder (PTSD), anxiety and depression disorders. This study explored the effect of exposure on mental health outcomes after a first earthquake and again after a second earthquake. A population-based mental health survey using self-report questionnaires was conducted on 278 people in the hard-hit areas of Lushan and Baoxing Counties 13-16 months after the Wenchuan earthquake (Sample 1). 191 of these respondents were evaluated again 8-9 months after the Lushan earthquake (Sample 2), which struck almost 5 years after the Wenchuan earthquake. In Sample 1, the prevalence rates for PTSD, anxiety and depression disorders were 44.53, 54.25 and 51.82%, respectively, and in Sample 2 the corresponding rates were 27.27, 38.63 and 36.93%. Females, the middle-aged, those of Tibetan nationality, and people who reported fear during the earthquake were at an increased risk of experiencing post-traumatic symptoms. Although the incidence of PTSD, anxiety and depression disorders decreased from Sample 1 to Sample 2, the cumulative effect of exposure to two earthquakes on mental health problems was serious in the hard-hit areas. Therefore, it is important that psychological counseling be provided for earthquake victims, and especially those exposed to multiple earthquakes.
Chen, Chiu-Jung; Liu, Pei-Lin
2007-01-01
This study evaluated the effects of a personalized computer-assisted mathematics problem-solving program on the performance and attitude of Taiwanese fourth grade students. The purpose of this study was to determine whether the personalized computer-assisted program improved student performance and attitude over the nonpersonalized program.…
Basic technological aspects and optimization problems in X-ray computed tomography (C.T.)
International Nuclear Information System (INIS)
Allemand, R.
1987-01-01
The current status and future prospects of physical performance are analysed and the optimization problems are approached for X-ray computed tomography. It is concluded that as long as clinical interest in computed tomography continues, technical advances can be expected in the near future to improve the density resolution, the spatial resolution and the X-ray exposure time. (Auth.)
Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.
Steinberg, Esther R.; And Others
1985-01-01
Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…
Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-01-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…
Computer Use and Vision-Related Problems Among University Students in Ajman, United Arab Emirates
Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K
2014-01-01
Background: The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 students studying at Gulf Medical University, Ajman and Ajman University of Science and Technology we...
Frusawa, Hiroshi
2014-05-01
A coarse-grained system of one-dimensional (1D) hard spheres (HSs) is created using the Delaunay tessellation, which enables one to define the quasi-0D state. It is found from comparing the quasi-0D and 1D free energy densities that a frozen state due to the emergence of quasi-0D HSs is thermodynamically more favorable than fluidity with a large-scale heterogeneity above crossover volume fraction of ϕc=e/(1+e)=0.731⋯ , at which the total entropy of the 1D state vanishes. The Delaunay-based lattice mapping further provides a similarity between the dense HS system above ϕc and the jamming limit in the car parking problem.
International Nuclear Information System (INIS)
Frusawa, Hiroshi
2014-01-01
A coarse-grained system of one-dimensional (1D) hard spheres (HSs) is created using the Delaunay tessellation, which enables one to define the quasi-0D state. It is found from comparing the quasi-0D and 1D free energy densities that a frozen state due to the emergence of quasi-0D HSs is thermodynamically more favorable than fluidity with a large-scale heterogeneity above crossover volume fraction of ϕc=e/(1+e)=0.731⋯, at which the total entropy of the 1D state vanishes. The Delaunay-based lattice mapping further provides a similarity between the dense HS system above ϕc and the jamming limit in the car parking problem.
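The car-parking limit mentioned at the end of the abstract can be simulated directly. The sketch below (illustrative only, not from the paper) performs random sequential adsorption of unit-length "cars" on an interval; the jamming coverage approaches Rényi's parking constant, approximately 0.7476, a quantity distinct from the crossover fraction ϕc above:

```python
import random

random.seed(1)

def parked_fraction(L):
    """Random sequential adsorption of unit cars on [0, L] until jammed."""
    stack, cars = [float(L)], 0
    while stack:
        gap = stack.pop()
        if gap < 1.0:
            continue                         # gap too small for another car
        x = random.uniform(0.0, gap - 1.0)   # left end of the new car in this gap
        cars += 1
        stack.extend((x, gap - 1.0 - x))     # remaining gaps on each side
    return cars / L

cov = parked_fraction(20000)   # should approach Renyi's constant ~0.7476
```

The iterative gap-splitting form avoids deep recursion and reproduces the exact RSA statistics, since each accepted car is uniform within its gap.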
Lee, Young-Jin
2017-01-01
Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
On a class of O(n²) problems in computational geometry
Gajentaan, A.; Overmars, M.H.
1993-01-01
There are many problems in computational geometry for which the best known algorithms take time O(n²) (or more) in the worst case while only very low lower bounds are known. In this paper we describe a large class of problems for which we prove that they are all at least as difficult as the following
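The class described here became known as the 3SUM-hard problems; the base problem, 3SUM (do any three of n given numbers sum to zero?), is itself solvable in quadratic time. A standard sketch, not taken from the paper:

```python
def three_sum(nums):
    """Decide whether any three numbers in nums sum to zero, in O(n^2) time."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            elif s < 0:
                lo += 1        # need a larger sum
            else:
                hi -= 1        # need a smaller sum
    return False
```

Despite decades of effort, no strongly subquadratic algorithm is known, which is what makes reductions from this problem meaningful evidence of hardness.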
Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems
Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen
1999-01-01
We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent
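At the core of the approach is the polynomial-time solution of minimum cut problems. As a generic illustration (not the paper's Lagrangian construction), the value of a minimum s-t cut can be obtained from a max-flow computation, here via Edmonds-Karp on a small hypothetical network:

```python
from collections import deque

def min_cut_value(n, edges, s, t):
    """Value of a minimum s-t cut via Edmonds-Karp max-flow (illustrative)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow            # no augmenting path: max-flow = min-cut
        # find bottleneck capacity along the path, then augment
        b, v = float('inf'), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b

cut = min_cut_value(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)
```

By the max-flow/min-cut theorem, the returned flow value equals the minimum cut capacity, which is the quantity such lower-bound schemes evaluate repeatedly.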
COYOTE: a finite element computer program for nonlinear heat conduction problems
International Nuclear Information System (INIS)
Gartling, D.K.
1978-06-01
COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program
Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Powell, Sarah R.; Seethaler, Pamela M.; Capizzi, Andrea M.; Schatschneider, Christopher; Fletcher, Jack M.
2006-01-01
The purpose of this study was to examine the cognitive correlates of RD-grade skill in arithmetic, algorithmic computation, and arithmetic word problems. Third graders (N = 312) were measured on language, nonverbal problem solving, concept formation, processing speed, long-term memory, working memory, phonological decoding, and sight word…
Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.
Steinberg, Esther R.; And Others
This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…
The computer-aided design of a servo system as a multiple-criteria decision problem
Udink ten Cate, A.J.
1986-01-01
This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies
Study and application of Dot 3.5 computer code in radiation shielding problems
International Nuclear Information System (INIS)
Otto, A.C.; Mendonca, A.G.; Maiorino, J.R.
1983-01-01
The application of the discrete ordinates (SN) transport code DOT 3.5 to radiation shielding problems is reviewed. Aiming to determine the best available options (convergence scheme, calculation mode) of the DOT 3.5 computer code for radiation shielding problems, a standard model from the 'Argonne Code Center' was selected, and several combinations of calculation options were used to evaluate the accuracy of the results and the computational time, in order to select the most efficient option. To illustrate the versatility and efficacy of the code applied to typical shielding problems, the calculation of neutron streaming along a sodium coolant channel is illustrated. (E.G.) [pt
A Study of the Correlation between Computer Games and Adolescent Behavioral Problems.
Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud
2013-01-01
Today, due to developing communication technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents gives rise to public uncertainty about their plausible harmful effects. This study aimed to investigate the correlation between computer games and behavioral problems in male guidance school students. This was a descriptive-correlative study of 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). The results of this study indicated a significant direct correlation (at the 95% level) between the amount of time adolescents spent playing games and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students' place of living and their parents' jobs, and their use of computer games. Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents.
A Study of the Correlation between Computer Games and Adolescent Behavioral Problems
Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud
2013-01-01
Background Today, due to developing communication technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents gives rise to public uncertainty about their plausible harmful effects. This study aimed to investigate the correlation between computer games and behavioral problems in male guidance school students. Methods This was a descriptive-correlative study of 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). Findings The results of this study indicated a significant direct correlation (at the 95% level) between the amount of time adolescents spent playing games and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students' place of living and their parents' jobs, and their use of computer games. Conclusion Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents. PMID:24494157
COMPUTER TOOLS OF DYNAMIC MATHEMATIC SOFTWARE AND METHODICAL PROBLEMS OF THEIR USE
Directory of Open Access Journals (Sweden)
Olena V. Semenikhina
2014-08-01
Full Text Available The article presents the results of an analysis of the standard computer tools of dynamic mathematics software that are used in solving tasks, and of the tools on which a teacher can rely in the teaching of mathematics. The possibility of organizing experimental investigation of mathematical objects on the basis of these tools, of formulating new tasks on the basis of a limited number of tools, and of fast automated checking is specified. Some methodological comments on the application of computer tools and methodological features of the use of interactive mathematical environments are presented. Problems arising from the use of computer tools are identified, among them the teacher's rethinking of forms and methods of training, the search for creative problems, the rational choice of environment, the checking of e-solutions, and common mistakes in the use of computer tools.
Utilizing of computational tools on the modelling of a simplified problem of neutron shielding
Energy Technology Data Exchange (ETDEWEB)
Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico]. E-mails: fsrlessa@gmail.com; gmplatt@iprj.uerj.br; halves@iprj.uerj.br
2007-07-01
At the current level of technology, many problems are investigated through computational simulations, whose results are in general satisfactory and much less expensive than the conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are executed using computational tools, such as computers of superior capacity and their application systems, to make complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and in space that Computational Modelling provides, there is a financial economy for the scientists. Computational Modelling is a modern methodology of investigation that calls for the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of such phenomena, the generation of a numeric algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we intended to use several computational tools, the implementation of numerical methods, and a deterministic model in the study and analysis of a well-known simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and computing the neutron flow at each space junction, programmed with Scilab version 4.0. (author)
Utilizing of computational tools on the modelling of a simplified problem of neutron shielding
International Nuclear Information System (INIS)
Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes
2007-01-01
At the current level of technology, many problems are investigated through computational simulations, whose results are in general satisfactory and much less expensive than the conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are executed using computational tools, such as computers of superior capacity and their application systems, to make complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and in space that Computational Modelling provides, there is a financial economy for the scientists. Computational Modelling is a modern methodology of investigation that calls for the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of such phenomena, the generation of a numeric algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we intended to use several computational tools, the implementation of numerical methods, and a deterministic model in the study and analysis of a well-known simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and computing the neutron flow at each space junction, programmed with Scilab version 4.0. (author)
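The original work solves a neutron transport model in Scilab; as a simplified, hedged stand-in, the sketch below solves 1-D neutron diffusion in a slab shield (hypothetical cross sections, zero-flux boundaries, uniform source) with a tridiagonal (Thomas) solve and checks the analytic centerline flux:

```python
import math

# -D*phi'' + Sa*phi = S on (0, L), phi(0) = phi(L) = 0; parameters hypothetical.
D, Sa, S, L = 1.0, 0.5, 1.0, 10.0
n = 199                          # interior mesh points
dx = L / (n + 1)

a = [-D / dx**2] * n             # sub-diagonal
b = [2 * D / dx**2 + Sa] * n     # diagonal
c = [-D / dx**2] * n             # super-diagonal
d = [S] * n                      # uniform source term

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, n):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]
phi = [0.0] * n
phi[-1] = d[-1] / b[-1]
for i in range(n - 2, -1, -1):
    phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]

Ld = math.sqrt(D / Sa)           # diffusion length
exact_mid = (S / Sa) * (1 - 1 / math.cosh(L / (2 * Ld)))  # analytic centerline flux
```

The second-order finite-difference solution should match the analytic midplane flux closely on this mesh, which is the kind of verification such simplified shielding models allow.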
Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems
International Nuclear Information System (INIS)
Yavuz, Musa
1998-01-01
We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods
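The first method, separation of variables, can be shown for the smallest case. Assuming the standard one-group SN slab equations with isotropic scattering and scattering ratio c, the ansatz psi_m = a_m * exp(-x/nu) turns the S2 equations (mu = ±1/sqrt(3)) into a 2x2 eigenvalue problem whose relaxation length nu reduces to the familiar diffusion-length formula 1/sqrt(3(1-c)); the parameter value below is hypothetical:

```python
import math

# Eigenvalues 1/nu of A = M^{-1} (I - (c/2) * ones * w^T) for S2 quadrature,
# where M = diag(mu, -mu), weights w = (1, 1), mu = 1/sqrt(3).
c = 0.5                          # scattering ratio (hypothetical value)
mu = 1.0 / math.sqrt(3.0)
A = [[(1 - c / 2) / mu, (-c / 2) / mu],
     [(c / 2) / mu, -(1 - c / 2) / mu]]
tr = A[0][0] + A[1][1]           # = 0 by the symmetry of the quadrature
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam = math.sqrt(tr * tr / 4 - det)   # eigenvalues are ±lam
nu = 1.0 / lam                       # relaxation length; ~0.8165 for c = 0.5
```

For higher-order SN sets the same construction yields an N x N matrix whose eigenpairs are exactly the objects the abstract's methods compute.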
Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems
International Nuclear Information System (INIS)
Yavuz, M.
1997-01-01
We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods. (author)
Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures
Directory of Open Access Journals (Sweden)
Hossam M. Faheem
2018-02-01
Full Text Available Viewing a computationally-intensive problem as a self-contained challenge with its own hardware, software and scheduling strategies is an approach that should be investigated. We might suggest assigning heterogeneous hardware architectures to solve a problem, while parallel computing paradigms may play an important role in writing efficient code to solve the problem; moreover, the scheduling strategies may be examined as a possible solution. Depending on the problem complexity, finding the best possible solution using an integrated infrastructure of hardware, software and scheduling strategy can be a complex job. Developing and using ontologies and reasoning techniques play a significant role in reducing the complexity of identifying the components of such integrated infrastructures. Undertaking reasoning and inferencing regarding the domain concepts can help to find the best possible solution through a combination of hardware, software and scheduling strategies. In this paper, we present an ontology and show how we can use it to solve computationally-intensive problems from various domains. As a potential use for the idea, we present examples from the bioinformatics domain. Validation by using problems from the Elastic Optical Network domain has demonstrated the flexibility of the suggested ontology and its suitability for use with any other computationally-intensive problem domain.
Engineering Courses on Computational Thinking Through Solving Problems in Artificial Intelligence
Directory of Open Access Journals (Sweden)
Piyanuch Silapachote
2017-09-01
Full Text Available Computational thinking sits at the core of every engineering and computing related discipline. It has increasingly emerged as its own subject in all levels of education. It is a powerful cornerstone for cognitive development, creative problem solving, algorithmic thinking and designs, and programming. How to effectively teach computational thinking skills poses real challenges and creates opportunities. Targeting entering computer science and engineering undergraduates, we resourcefully integrate elements from artificial intelligence (AI) into introductory computing courses. In addition to comprehension of the essence of computational thinking, practical exercises in AI enable inspiration of collaborative problem solving beyond abstraction, logical reasoning, critical and analytical thinking. Problems in machine intelligence systems intrinsically connect students to algorithmic-oriented computing and essential mathematical foundations. Beyond knowledge representation, AI fosters a gentle introduction to data structures and algorithms. Because the focus is on an engaging mental tool, a computer is never a necessity. Neither coding nor programming is ever required. Instead, students enjoy constructivist classrooms designed to always be active, flexible, and highly dynamic. Learning to learn and reflecting on cognitive experiences, they rigorously construct knowledge from collectively solving exciting puzzles, competing in strategic games, and participating in intellectual discussions.
Energy Technology Data Exchange (ETDEWEB)
Seeliger, A [Technische Hochschule Aachen (Germany)
1990-01-01
Analyses of the planning activity in the planning departments of German hard coal mines have shown that in some branches of the planning process the productivity and creativity of the involved experts can be increased, potentials for rationalization can be opened up, and the cooperation between different engineering disciplines can be improved by using computer network systems in combination with graphic systems. This paper reports on the computer-supported planning system 'Grube', which has been developed at the RWTH (technical university) Aachen, and its applications in mine surveying, electro-technical and mechanical planning, as well as in the planning of ventilation systems and detailed mine planning. The software module GRUBE-W, which will in future be the centre of the workstation for mine ventilation planning at Ruhrkohle AG, is discussed in detail. (orig.).
Awange, Joseph L
2004-01-01
While preparing and teaching 'Introduction to Geodesy I and II' to undergraduate students at Stuttgart University, we noticed a gap which motivated the writing of the present book: almost every topic that we taught required some skills in algebra, and in particular, computer algebra! From positioning to transformation problems inherent in geodesy and geoinformatics, knowledge of algebra and application of computer algebra software were required. In preparing this book therefore, we have attempted to put together basic concepts of abstract algebra which underpin the techniques for solving algebraic problems. Algebraic computational algorithms useful for solving problems which require exact solutions to nonlinear systems of equations are presented and tested on various problems. Though the present book focuses mainly on the two fields, the concepts and techniques presented herein are nonetheless applicable to other fields where algebraic computational problems might be encountered. In Engineering for example, network densification and robo...
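One classical computer-algebra tool for the exact solution of nonlinear polynomial systems of the kind the book treats is the Sylvester resultant, which eliminates a variable from two polynomial equations using only exact arithmetic. A stdlib-only sketch on a hypothetical example (a unit circle intersected with the line x = y):

```python
from fractions import Fraction

# Eliminate x from f = x^2 + (y^2 - 1) and g = x - y via the Sylvester resultant.
# Matrix entries are polynomials in y, stored as ascending coefficient lists.

def padd(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, v in enumerate(p):
        r[i] += v
    for i, v in enumerate(q):
        r[i] += v
    return r

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def det(M):
    """Determinant by cofactor expansion; entries are polynomials in y."""
    if len(M) == 1:
        return M[0][0]
    total = [Fraction(0)]
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = pmul(entry, det(minor))
        if j % 2:
            term = [-t for t in term]
        total = padd(total, term)
    return total

F = Fraction
# Sylvester matrix of f (degree 2 in x) and g (degree 1 in x), rows in
# descending powers of x; g contributes two shifted rows.
syl = [
    [[F(1)], [F(0)],        [F(-1), F(0), F(1)]],   # f: x^2 + 0*x + (y^2 - 1)
    [[F(1)], [F(0), F(-1)], [F(0)]],                # g: x - y
    [[F(0)], [F(1)],        [F(0), F(-1)]],         # g shifted by one power of x
]
res = det(syl)                   # resultant, a polynomial in y
while len(res) > 1 and res[-1] == 0:
    res.pop()                    # strip trailing zero coefficients
print(res)                       # coefficients of 2*y^2 - 1, ascending
```

The eliminant 2y² - 1 = 0 gives the exact intersection ordinates y = ±1/√2, illustrating how resultants reduce a bivariate system to a univariate root-finding problem.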
A Computational Analysis of the Traveling Salesman and Cutting Stock Problems
Directory of Open Access Journals (Sweden)
Gracia María D.
2015-01-01
Full Text Available The aim of this article is to perform a computational study to analyze the impact of formulations and the solution strategy on the algorithmic performance of two classical optimization problems: the traveling salesman problem and the cutting stock problem. In order to assess the algorithmic performance on both problems, three dependent variables were used: solution quality, computing time and number of iterations. The results are useful for choosing the solution approach to each specific problem. In the STSP, the results demonstrate that the multistage decision formulation is better than the conventional formulations, solving 90.47% of the instances compared with MTZ (76.19%) and DFJ (14.28%). The results of the CSP demonstrate that the cutting patterns formulation is better than the standard formulation with symmetry breaking inequalities when the objective function is to minimize trim loss when cutting the rolls.
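For instances small enough, the solution quality of any formulation can be verified against an exact baseline. A hedged sketch of the Held-Karp dynamic program for the TSP (an exact O(n²·2ⁿ) method, not one of the MILP formulations the article compares):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour length by Held-Karp dynamic programming."""
    n = len(dist)
    # dp[(S, j)] = shortest path from city 0 through vertex set S (a bitmask),
    # ending at city j
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            mask = 0
            for j in S:
                mask |= 1 << j
            for j in S:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in S if k != j)
    full = (1 << n) - 2              # all cities except the start, city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Hypothetical 4-city symmetric instance; the optimal tour 0-1-2-3-0 costs 26.
dist = [[0, 1, 15, 6],
        [1, 0, 7, 3],
        [15, 7, 0, 12],
        [6, 3, 12, 0]]
opt = held_karp(dist)
```

Such an exact baseline is only feasible for tiny n, which is precisely why computational comparisons of MILP formulations, as in the article, matter for realistic instance sizes.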
Musculoskeletal Problems Associated with University Students Computer Users: A Cross-Sectional Study
Directory of Open Access Journals (Sweden)
Rakhadani PB
2017-07-01
Full Text Available While several studies have examined the prevalence and correlates of musculoskeletal problems among university students, scanty information exists in the South African context. The objective of this study was to determine the prevalence, causes and consequences of musculoskeletal problems among University of Venda student computer users. This cross-sectional study involved 694 university students at the University of Venda. A self-designed questionnaire was used to collect information on sociodemographic characteristics, problems associated with computer use, and causes of musculoskeletal problems associated with computer use. The majority (84.6%) of the participants use the computer for internet, word processing (20.3%), and games (18.7%). The students reported neck pain when using the computer (52.3%); shoulder (47.0%), finger (45.0%), lower back (43.1%), general body pain (42.9%), elbow (36.2%), wrist (33.7%), hip and foot (29.1%) and knee (26.2%). Reported causes of musculoskeletal pain associated with computer usage were: sitting position, low chair, a lot of time spent on the computer, uncomfortable laboratory chairs, and stressfulness. Eye problems (51.9%), muscle cramp (344.0%), headache (45.3%), blurred vision (38.0%), feeling of illness (39.9%) and missed lectures (29.1%) were consequences of musculoskeletal problems linked to computer use. The majority of students reported having mild pain (43.7%), moderate (24.2%), and severe (8.4%) pain. Years of computer use were significantly associated with neck, shoulder and wrist pain. Using the computer for internet was significantly associated with neck pain (OR=0.60; 95% CI 0.40-0.93); games: neck (OR=0.60; 95% CI 0.40-0.85) and hip/foot (OR=0.60; CI 95% 0.40-0.92); programming: elbow (OR=1.78; CI 95% 1.10-2.94) and wrist (OR=2.25; CI 95% 1.36-3.73); while word processing was significantly associated with lower back pain (OR=1.45; CI 95% 1.03-2.04). Undergraduate study had a significant association with elbow pain (OR=2
Solving Large-Scale Computational Problems Using Insights from Statistical Physics
Energy Technology Data Exchange (ETDEWEB)
Selman, Bart [Cornell University
2012-02-29
Many challenging problems in computer science and related fields can be formulated as constraint satisfaction problems. Such problems consist of a set of discrete variables and a set of constraints between those variables, and represent a general class of so-called NP-complete problems. The goal is to find a value assignment to the variables that satisfies all constraints, generally requiring a search through an exponentially large space of variable-value assignments. Models for disordered systems, as studied in statistical physics, can provide important new insights into the nature of constraint satisfaction problems. Recently, work in this area has resulted in the discovery of a new method for solving such problems, called the survey propagation (SP) method. With SP, we can solve problems with millions of variables and constraints, an improvement of two orders of magnitude over previous methods.
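Survey propagation itself is an elaborate message-passing method; as a hedged baseline showing what a constraint satisfaction problem looks like, here is a tiny generic backtracking solver (illustrative, exponential in the worst case, and thus exactly what SP improves upon at scale):

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Backtracking search for a CSP.
    constraints: list of (scope, predicate) pairs, each checked once every
    variable in its scope has been assigned."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(pred(*[assignment[v] for v in scope])
                 for scope, pred in constraints
                 if all(v in assignment for v in scope))
        if ok:
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Usage: 3-coloring a triangle (satisfiable with 3 colors, not with 2)
neq = lambda x, y: x != y
cons = [(('a', 'b'), neq), (('b', 'c'), neq), (('a', 'c'), neq)]
sol = solve_csp(['a', 'b', 'c'], {v: [0, 1, 2] for v in 'abc'}, cons)
```

Graph coloring, SAT, and scheduling all fit this interface; the statistical-physics insight concerns how the space of solutions fragments as such problems approach their hardness threshold.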
Digital Repository Service at National Institute of Oceanography (India)
Charhate, S.B.; Deo, M.C.; SanilKumar, V.
. Owing to the complex real sea conditions, such methods may not always yield satisfactory results. This paper discusses a few alternative approaches based on the soft computing tools of artificial neural networks (ANNs) and genetic programming (GP...
Stinson, Michael S; Elliot, Lisa B; Easton, Donna
2014-04-01
Four groups of postsecondary students, 25 who were deaf/hard of hearing (D/HH), 25 with a learning disability, 25 who were English language learners (ELLs), and 25 without an identified disability studied notes that included text and graphical information based on a physics or a marine biology lecture. The latter 3 groups were normally hearing. All groups had higher scores on post- than on pretests for each lecture, with each group showing generally similar gains in amount of material learned from the pretest to the posttest. For each lecture, the D/HH students scored lower on the pre- and posttests than the other 3 groups of participants. Results indicated that students acquired measurable amounts of information from studying these types of notes for relatively short periods and that the notes have equal potential to support the acquisition of information by each of these groups of students.
Directory of Open Access Journals (Sweden)
Ivan H. Lenchuk
2014-02-01
Full Text Available The article concerns construction problems in plane geometry. It addresses the problem of forming in students stable skills for the efficient, time-economical visual representation of algorithms for solving construction problems on modern computer screens. The author's universal method of fragmented typification of tasks by the method of circles is used: a core-type problem is singled out and subsequently filled in with ingredients. Previously developed educational software (in part, GeoGebra) ensures optimal realization of the constructions; its dynamic characteristics and constructive capabilities support high-quality visual stages of 'proof' and 'investigation'.
A review on economic emission dispatch problems using quantum computational intelligence
Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.
2016-11-01
Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, the limited availability of natural resources, and global warming have made this topic a center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques like the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper encourages researchers to use more QCI-based algorithms to obtain better optimal results for EED problems.
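The QPSO technique named above can be made concrete with a minimal sketch. This is not the review's implementation: the weighted-sum fuel/emission objective, its coefficients, and the omission of the power-balance constraint of real EED problems are all assumptions made here for illustration.

```python
import math
import random

random.seed(0)

def qpso(cost, dim, bounds, n_particles=30, iters=200, beta=0.75):
    """Quantum-behaved PSO: particles move around a local attractor with a
    contraction-expansion coefficient beta instead of explicit velocities."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        # "mainstream thought point": mean of all personal bests
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(X):
            for d in range(dim):
                phi = random.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]  # local attractor
                u = 1.0 - random.random()                      # u in (0, 1]
                delta = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = p + delta if random.random() < 0.5 else p - delta
                x[d] = min(max(x[d], lo), hi)                  # respect bounds
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
                if cost(x) < cost(gbest):
                    gbest = x[:]
    return gbest

# Hypothetical EED-style objective: weighted sum of quadratic fuel-cost and
# emission curves for three generating units (coefficients invented).
def eed_cost(p):
    fuel = sum(0.01 * pi ** 2 + 2.0 * pi for pi in p)
    emission = sum(0.005 * pi ** 2 + 0.1 * pi for pi in p)
    return 0.7 * fuel + 0.3 * emission

best = qpso(eed_cost, dim=3, bounds=(0.0, 100.0))
```

Since every coefficient here is positive, the optimum of this toy objective sits at the lower generation bound, so the returned dispatch should cost far less than a mid-range one.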
The situation of computer utilization in radiation therapy in Japan and other countries and problems
International Nuclear Information System (INIS)
Onai, Yoshio
1981-01-01
The uses of computers in radiation therapy are various, including radiation dose calculation, clinical history management, radiotherapy instrument automation, and biological modeling. To grasp the situation in this field, a survey by questionnaire was carried out internationally at the time of the 7th International Conference on the Use of Computers in Radiation Therapy, held in Kawasaki and Tokyo in September 1980. The surveyed nations totaled 21 including Japan; the number of facilities that responded was 203 in Japan and 111 in other countries, and the period covered was December 1979 to September 1980. The results of the survey are described as follows: areas of use of computers in hospitals, computer utilization in radiation departments, computer uses in radiation therapy, and evaluation of radiotherapy computer uses and problems. (J.P.N.)
EDDYMULT: a computing system for solving eddy current problems in a multi-torus system
International Nuclear Information System (INIS)
Nakamura, Yukiharu; Ozeki, Takahisa
1989-03-01
A new computing system EDDYMULT based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus-conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy current in a multi-torus system, the transient phenomena of eddy current distributions and the resultant magnetic field. Therefore, users can apply the computing system to the solution of the eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design of the intensive electromagnetic loading on device components and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)
DEFF Research Database (Denmark)
Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille
2015-01-01
OBJECTIVES: Young people's engagement in electronic gaming and Internet communication have caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet communication are associated with young people's social relations. METHODS: Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009. Response rate 89 %, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of student's social relations. RESULTS: Perception of problems related to computer gaming were associated…
Vectorization on the star computer of several numerical methods for a fluid flow problem
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
Computer problem-solving coaches for introductory physics: Design and usability studies
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-06-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.
Problems and Issues in Using Computer- Based Support Tools to Enhance 'Soft' Systems Methodologies
Directory of Open Access Journals (Sweden)
Mark Stansfield
2001-11-01
Full Text Available This paper explores the issue of whether computer-based support tools can enhance the use of 'soft' systems methodologies as applied to real-world problem situations. Although work has been carried out by a number of researchers in applying computer-based technology to concepts and methodologies relating to 'soft' systems thinking such as Soft Systems Methodology (SSM), such attempts appear to be still in their infancy and have not been applied widely to real-world problem situations. This paper will highlight some of the problems that may be encountered in attempting to develop computer-based support tools for 'soft' systems methodologies. Particular attention will be paid to an attempt by the author to develop a computer-based support tool for a particular 'soft' systems method of inquiry known as the Appreciative Inquiry Method, which is based upon Vickers' notion of 'appreciation' (Vickers, 1965) and Checkland's SSM (Checkland, 1981). The final part of the paper will explore some of the lessons learnt from developing and applying the computer-based support tool to a real-world problem situation, as well as considering the feasibility of developing computer-based support tools for 'soft' systems methodologies. This paper will put forward the point that a mixture of manual and computer-based tools should be employed to allow a methodology to be used in an unconstrained manner, while the benefits provided by computer-based technology should be utilised in supporting and enhancing the more mundane and structured tasks.
A Study of the Correlation between Computer Games and Adolescent Behavioral Problems
Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud
2013-01-01
Background Today, due to developing communication technologies, computer games and other audio-visual media are very attractive social phenomena and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents has raised public uncertainty about their plausible harmful effects. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance-school students. Methods Th...
A DNA Computing Model for the Graph Vertex Coloring Problem Based on a Probe Graph
Directory of Open Access Journals (Sweden)
Jin Xu
2018-02-01
Full Text Available The biggest bottleneck in DNA computing is exponential explosion, in which the DNA molecules used as data in information processing grow exponentially with an increase of problem size. To overcome this bottleneck and improve the processing speed, we propose a DNA computing model to solve the graph vertex coloring problem. The main points of the model are as follows: ① The exponential explosion problem is solved by dividing subgraphs, reducing the vertex colors without losing the solutions, and ordering the vertices in subgraphs; and ② the bio-operation times are reduced considerably by a designed parallel polymerase chain reaction (PCR) technology that dramatically improves the processing speed. In this article, a 3-colorable graph with 61 vertices is used to illustrate the capability of the DNA computing model. The experiment showed that not only are all the solutions of the graph found, but also more than 99% of false solutions are deleted when the initial solution space is constructed. The powerful computational capability of the model was based on specific reactions among the large number of nanoscale oligonucleotide strands. All these tiny strands are operated by DNA self-assembly and parallel PCR. After thousands of accurate PCR operations, the solutions were found by recognizing, splicing, and assembling. We also prove that the searching capability of this model is up to O(3^59). By means of an exhaustive search, it would take more than 896 000 years for an electronic computer (5 × 10^14 s^-1) to achieve this enormous task. This searching capability is the largest among both the electronic and non-electronic computers that have been developed since the DNA computing model was proposed by Adleman's research group in 2002 (with a searching capability of O(2^20)). Keywords: DNA computing, Graph vertex coloring problem, Polymerase chain reaction
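For scale comparison, an electronic computer typically attacks graph vertex coloring by sequential backtracking, whose worst case is exactly the exponential search the DNA model performs chemically in parallel. A minimal sketch (the 5-cycle example graph is invented here, not taken from the article):

```python
def color_graph(n, edges, k):
    """Backtracking search for a proper k-coloring of an n-vertex graph.
    Returns a list of colors (0..k-1) or None. Exponential in the worst
    case: this is the sequential counterpart of the DNA model's parallel
    exploration of the solution space."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = [None] * n

    def backtrack(v):
        if v == n:
            return True
        for c in range(k):
            # try color c if no already-colored neighbor uses it
            if all(colors[u] != c for u in adj[v] if colors[u] is not None):
                colors[v] = c
                if backtrack(v + 1):
                    return True
        colors[v] = None
        return False

    return colors if backtrack(0) else None

# 5-cycle: 3-colorable but not 2-colorable (odd cycle)
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
sol = color_graph(5, cycle5, 3)
```

An odd cycle needs three colors, so a 2-color search fails while the 3-color search succeeds.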
Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems
Directory of Open Access Journals (Sweden)
Tsay Tswn-Syau
2017-01-01
Full Text Available Computation iteration schemes and a memory allocation technique for the finite difference method are presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates is taken as the illustrating example, and a 3-dimensional second-order accurate 19-point scheme is presented. Traditional element-by-element methods (e.g. SOR) are preferred since they are simple and memory efficient but time consuming in computation. For efficient memory allocation, an index method is presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods are reported to be computationally efficient. Among them, using incomplete Cholesky decomposition as a preconditioner is reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) adaptable to various governing and boundary conditions, (2) flexible for higher order approximation, (3) independent of problem dimension, (4) efficient for complex problems when the global matrix is not symmetric, (5) convenient for general sparse matrices, (6) computationally efficient in the most time consuming procedure of matrix multiplication, and (7) applicable to any developed matrix solver.
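The two ingredients discussed above, index-based storage of a sparse matrix and a conjugate-gradient-like iteration, can be sketched as follows. This is a simplification assumed for illustration: a tiny 1-D Laplacian stands in for the 3-D 19-point scheme, and no incomplete Cholesky preconditioner is applied.

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in compressed sparse row (index) form:
    only nonzero entries are kept, plus their column indices and row offsets."""
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned CG for a symmetric positive definite system."""
    x = [0.0] * len(b)
    r = b[:]                       # residual b - A x with x = 0
    p = r[:]                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal [-1, 2, -1]) for n = 4, in CSR index form
data    = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
indices = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
indptr  = [0, 2, 5, 8, 10]
b = [1.0, 0.0, 0.0, 1.0]
x = conjugate_gradient(lambda v: csr_matvec(data, indices, indptr, v), b)
```

For this right-hand side the exact solution is x = (1, 1, 1, 1), and in exact arithmetic CG on a symmetric positive definite n × n system converges in at most n iterations.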
Human-computer interfaces applied to numerical solution of the Plateau problem
Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério
2015-09-01
In this work we present a code in Matlab to solve the Plateau problem numerically; the code will include a human-computer interface. The Plateau problem has applications in areas of knowledge such as, for instance, computer graphics. The solution method will be the same as that of the Surface Evolver, but the difference will be a complete graphical interface with the user. This will enable us to implement other kinds of interfaces such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology
A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Abhishek Bhatia
2015-03-01
Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the widely endeavored problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for the SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.
Distributed Systems: The Hard Problems
CERN. Geneva
2015-01-01
**Nicholas Bellerophon** works as a client services engineer at Basho Technologies, helping customers set up and run distributed systems at scale in the wild. He has also worked in massively multiplayer games, and recently completed a live scalable simulation engine. He is an avid TED-watcher with interests in many areas of the arts, science, and engineering, including of course high-energy physics.
Directory of Open Access Journals (Sweden)
A.N. Khomchenko
2016-08-01
Full Text Available The paper considers the problem of bicubic interpolation on a finite element of the serendipity family. Using cognitive-graphical analysis, the rigid model of Ergatoudis, Irons and Zienkiewicz (1968) is compared with alternative models obtained by direct geometric design, weighted averaging of the basis polynomials, and systematic generation of bases (an extended Taylor procedure). Emphasis is placed on the phenomenon of "gravitational repulsion" (the Zienkiewicz paradox), and the causes of physically inadequate spectra of nodal loads on higher-order serendipity elements are investigated. Soft modeling allows many serendipity elements of bicubic interpolation to be built, without even needing to know the exact form of the rigid model. Different interpretations of the integral characteristics of the basis polynomials are offered: geometrical, physical, and probabilistic. A soft model in the theory of interpolation of functions of two variables is one amenable to change through the choice of basis; such changes are excluded in the family of higher-order Lagrangian finite elements (hard modeling), and the standard models of the serendipity family (Zienkiewicz) were likewise rigid. It was found that the "responsibility" for the rigidity of the serendipity model rests with ruled surfaces of zero Gaussian curvature (conoids) that predominate in the basis set. Cognitive portraits of the zero lines of standard serendipity surfaces suggest that, to "soften" a serendipity model, the conoids should be replaced by surfaces of alternating Gaussian curvature. The article presents alternative (soft) bases for serendipity models. The work is devoted to solving scientific and technological problems aimed at the creation, dissemination and use of cognitive computer graphics in teaching and learning. The results are of interest to students of the specialties "Computer Science and Information Technologies", "System Analysis", and "Software Engineering", as well as
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-10-23
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n). It is an important problem in applied mathematics, having numerous real life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and get the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and use their inherent parallelism to reduce the complexity of the computation.
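The classical (non-DNA) formulation of the UAP can be made concrete with a brute-force sketch: each of the m individuals is replicated ceil(n/m) times so every job can be covered, and all matchings are enumerated. The cost matrix below is a made-up example; the exponential enumeration is exactly what the paper's O(mn) DNA algorithm and polynomial-time methods such as the Hungarian algorithm avoid.

```python
from itertools import permutations

def solve_uap(cost, m, n):
    """Brute-force the unbalanced assignment problem (n jobs, m individuals,
    m < n): replicate each individual ceil(n/m) times so all jobs can be
    covered, then enumerate every matching. Exponential; illustration only."""
    k = -(-n // m)                              # ceil(n/m) jobs per individual
    owner = [r // k for r in range(m * k)]      # replicated row -> individual
    best_total, best_assign = float("inf"), None
    for rows in permutations(range(m * k), n):  # rows[j] serves job j
        total = sum(cost[owner[r]][j] for j, r in enumerate(rows))
        if total < best_total:
            best_total = total
            best_assign = {j: owner[r] for j, r in enumerate(rows)}
    return best_total, best_assign

# Invented 3-individual, 5-job cost matrix (m = 3 < n = 5)
cost = [[4, 2, 8, 7, 6],
        [3, 9, 5, 2, 4],
        [6, 1, 3, 4, 9]]
total, assign = solve_uap(cost, m=3, n=5)
```

Here the optimum assigns jobs 1 and 2 to individual 2, jobs 3 and 4 to individual 1, and job 0 to individual 0, at total cost 14.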
DEFF Research Database (Denmark)
Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan
2012-01-01
The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application…
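The Chambolle-Pock iteration is indeed easy to prototype. The sketch below applies it to the toy problem min_x 0.5||Ax - b||^2, cast as min_x F(Kx) + G(x) with K = A, F(y) = 0.5||y - b||^2 and G = 0; this is far simpler than the CT reconstruction problems in the paper, and the step sizes and 2x2 example system are assumptions made here for illustration.

```python
import numpy as np

def chambolle_pock_lsq(A, b, n_iter=2000):
    """Chambolle-Pock primal-dual iteration for min_x 0.5*||Ax - b||^2.
    With F(y) = 0.5*||y - b||^2, the prox of sigma*F* is
    y -> (y - sigma*b)/(1 + sigma); with G = 0, the prox of tau*G is
    the identity."""
    L = np.linalg.norm(A, 2)                  # ||K||, needed for step sizes
    sigma = tau = 0.99 / L                    # satisfies sigma*tau*L^2 < 1
    x = np.zeros(A.shape[1])
    xbar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        y = (y + sigma * (A @ xbar) - sigma * b) / (1.0 + sigma)  # dual step
        x_new = x - tau * (A.T @ y)           # primal step (prox of G = 0)
        xbar = 2.0 * x_new - x                # over-relaxation
        x = x_new
    return x

# Invented 2x2 example; A is invertible, so the least-squares
# minimizer is the exact solution of Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = chambolle_pock_lsq(A, b)
```

The iterates converge to the normal-equations solution, here x = (2, 3).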
Baker, Nancy A; Aufman, Elyse L; Poole, Janet L
2012-01-01
We identified the extent of the need for interventions and assistive technology to prevent computer use problems in people with systemic sclerosis (SSc) and the accommodation strategies they use to alleviate such problems. Respondents were recruited through the Scleroderma Foundation. Twenty-seven people with SSc who used a computer and reported difficulty in working completed the Computer Problems Survey. All but 1 of the respondents reported at least one problem with at least one equipment type. The highest number of respondents reported problems with keyboards (88%) and chairs (85%). More than half reported discomfort in the past month associated with the chair, keyboard, and mouse. Respondents used a variety of accommodation strategies. Many respondents experienced problems and discomfort related to computer use. The characteristic symptoms of SSc may contribute to these problems. Occupational therapy interventions for computer use problems in clients with SSc need to be tested. Copyright © 2012 by the American Occupational Therapy Association, Inc.
Directory of Open Access Journals (Sweden)
Jelmer P Borst
Full Text Available BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.
Bolemon, Jay S.; Etzold, David J.
1974-01-01
Discusses the use of a small computer to solve self-consistent field problems of one-dimensional systems of two or more interacting particles in an elementary quantum mechanics course. Indicates that the calculation can serve as a useful introduction to the iterative technique. (CC)
Cost-effective computations with boundary interface operators in elliptic problems
International Nuclear Information System (INIS)
Khoromskij, B.N.; Mazurkevich, G.E.; Nikonov, E.G.
1993-01-01
The numerical algorithm for fast computations with interface operators associated with the elliptic boundary value problems (BVP) defined on step-type domains is presented. The algorithm is based on the asymptotically almost optimal technique developed for treatment of the discrete Poincare-Steklov (PS) operators associated with the finite-difference Laplacian on rectangles when using the uniform grid with a 'displacement by h/2'. The approach can be regarded as an extension of the method proposed for the partial solution of the finite-difference Laplace equation to the case of displaced grids and mixed boundary conditions. It is shown that the action of the PS operator for the Dirichlet problem and mixed BVP can be computed with expenses of the order of O(N log^2 N) both for arithmetical operations and computer memory needs, where N is the number of unknowns on the rectangle boundary. The single domain algorithm is applied to solving the multidomain elliptic interface problems with piecewise constant coefficients. The numerical experiments presented confirm almost linear growth of the computational costs and memory needs with respect to the dimension of the discrete interface problem. 14 refs., 3 figs., 4 tabs
Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille; Pedersen, Trine Pagh; Nielsen, Line; Madsen, Katrine Rich; Holstein, Bjørn E
2015-02-01
Young people's engagement in electronic gaming and Internet communication have caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet communication are associated with young people's social relations. Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009. Response rate 89%, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of student's social relations. Perception of problems related to computer gaming were associated with almost all aspects of poor social relations among boys. Among girls, an association was only seen for bullying. For both boys and girls, perceived problems related to Internet use were associated with bullying only. Although the study is cross-sectional, the findings suggest that computer gaming and Internet use may be harmful to young people's social relations.
Large scale inverse problems computational methods and applications in the earth sciences
Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan
2013-01-01
This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" taking place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
ONTOLOGY OF COMPUTATIONAL EXPERIMENT ORGANIZATION IN PROBLEMS OF SEARCHING AND SORTING
Directory of Open Access Journals (Sweden)
A. Spivakovsky
2011-05-01
Full Text Available Ontologies are a key technology for the semantic processing of knowledge. We examine a methodology for using ontologies to organize computational experiments on searching and sorting problems in the course "Basics of Algorithms and Programming".
Barendregt, W.; Bekker, M.M.
2006-01-01
This article describes the development and assessment of a coding scheme for finding both usability and fun problems through observations of young children playing computer games during user tests. The proposed coding scheme is based on an existing list of breakdown indication types of the detailed
Energy Technology Data Exchange (ETDEWEB)
Falikov, A A; Vakhrushev, V V; Kuul, V S; Samoilov, O B; Tarasov, G I [OKBM, Nizhny Novgorod (Russian Federation)
1997-09-01
The paper briefly reviews the specific thermal-hydraulic problems for AST-type NHRs, the experimental investigations that have been carried out in the RF, and the design procedures and computer codes used for AST-500 thermohydraulic characteristics and safety validation. (author). 13 refs, 10 figs, 1 tab.
Lautenschlager, Stephan; Bright, Jen A; Rayfield, Emily J
2014-04-01
Gross dissection has a long history as a tool for the study of human or animal soft- and hard-tissue anatomy. However, apart from being a time-consuming and invasive method, dissection is often unsuitable for very small specimens and often cannot capture spatial relationships of the individual soft-tissue structures. The handful of comprehensive studies on avian anatomy using traditional dissection techniques focus nearly exclusively on domestic birds, whereas raptorial birds, and in particular their cranial soft tissues, are essentially absent from the literature. Here, we digitally dissect, identify, and document the soft-tissue anatomy of the Common Buzzard (Buteo buteo) in detail, using the new approach of contrast-enhanced computed tomography using Lugol's iodine. The architecture of different muscle systems (adductor, depressor, ocular, hyoid, neck musculature), neurovascular, and other soft-tissue structures is three-dimensionally visualised and described in unprecedented detail. The three-dimensional model is further presented as an interactive PDF to facilitate the dissemination and accessibility of anatomical data. Due to the digital nature of the data derived from the computed tomography scanning and segmentation processes, these methods hold the potential for further computational analyses beyond descriptive and illustrative proposes. © 2013 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.
Zhang, Shao-Liang; Imamura, Toshiyuki; Yamamoto, Yusaku; Kuramashi, Yoshinobu; Hoshi, Takeo
2017-01-01
This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas, and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.
Upgrade of RMS computers for Y2K problems in RX and related building of HANARO
International Nuclear Information System (INIS)
Kim, Jung Taek; Kim, J. T.; Ham, C. S.; Kim, C. H.; Lee, Bong Jae; Jae, Yoo Kyung
2000-08-01
The objectives of this project are as follows: - To resolve the Y2K problems and the operation and maintenance of RMS computers in the RX and related buildings of HANARO - To upgrade 486 PCs to Pentium II PCs - To make a Windows NT-based platform for users - To make an information structure for radiation monitoring using wireless and network devices. The contents of the project are as follows: - To make a Windows NT-based platform for the Radiation Monitoring System - To make a software platform and environment for developing the application program - To design and implement the database structure - To implement an RS-232C communication program between local indicators and scanning computers - To implement an IEEE 802.3 Ethernet communication program between scanning computers and RMTs - To implement a user interface for radiation monitoring - To test and inspect for Y2K problems
The benefits of computer-generated feedback for mathematics problem solving.
Fyfe, Emily R; Rittle-Johnson, Bethany
2016-07-01
The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving. Copyright © 2016 Elsevier Inc. All rights reserved.
Proceedings of the International Conference on Soft Computing for Problem Solving
Nagar, Atulya; Pant, Millie; Bansal, Jagdish
2012-01-01
The present book is based on the research papers presented at the International Conference on Soft Computing for Problem Solving (SocProS 2011), held at Roorkee, India. The book is divided into two volumes and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting and data mining. Particular emphasis is laid on soft computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments taking place in various fields and the latest problem-solving tools being developed to deal with complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed at researchers and scientists engaged in various fields of science and technology.
International Nuclear Information System (INIS)
So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.
2007-01-01
We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers: Linux running on an x86 desktop computer, RTAI for real-time response, the COMEDI driver for the data acquisition hardware, Qt and PyQt for the graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of a deterministic data-sampling period of tens of microseconds with typical timing jitter of several microseconds. At the same time, Linux runs other non-real-time processes handling the user interface. A modern Qt-based controls frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates lets the experimenters verify the quality of the data before detailed analysis. Python scripting is built in for automation. The typical data rate for continuous runs is around 10 Mbytes/min
Computer use and vision-related problems among university students in Ajman, United Arab Emirates.
Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K
2014-03-01
The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying at Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of computer usage and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables, with the level of statistical significance set at P < 0.05. The most common symptoms reported among computer users were headache, 53.3% (251/471); burning sensation in the eyes, 54.8% (258/471); and tired eyes, 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students, while tired eyes caused interruption of work in 43.5% (98/168). When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm; OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.
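The odds ratios and confidence intervals quoted in records like the one above come from 2x2 exposure-by-symptom tables. A minimal sketch of that calculation using Woolf's log-based interval; the counts below are hypothetical illustrations, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-based) 95% CI for a 2x2 table:
    a = exposed with symptom, b = exposed without,
    c = unexposed with symptom, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts, chosen only to illustrate the arithmetic:
or_, lower, upper = odds_ratio_ci(50, 120, 60, 90)
```

An OR below 1 with an upper CI bound below 1, as in the 50-100 cm viewing-distance result above, indicates a statistically significant protective association.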
A soft computing-based approach to optimise queuing-inventory control problem
Alaghebandha, Mohammad; Hajipour, Vahid
2015-04-01
In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is Non-deterministic Polynomial-time hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve the model. To validate the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the best values of the algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.
Enticknap, Nicholas
2014-01-01
Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why they are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to stay up to date. Computer people do not
Khan, Gufran Sayeed; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
The presentation includes grazing incidence X-ray optics, motivation and challenges, mid-spatial-frequency generation in cylindrical polishing, design considerations for the polishing lap, simulation studies and experimental results, future scope, and a summary. Topics include the current status of replication optics technology, the cylindrical polishing process using a large polishing lap, non-conformance of the polishing lap to the optics, development of software and a polishing machine, deterministic prediction of polishing, a polishing experiment under optimum conditions, and a polishing experiment based on a known error profile. Future plans include determination of the non-uniformity in the polishing lap compliance, development of a polishing sequence based on a known error profile of the specimen, software for generating a mandrel polishing sequence, design and development of a flexible polishing lap, and a computer-controlled localized polishing process.
Desktop Grid Computing with BOINC and its Use for Solving the RND telecommunication Problem
International Nuclear Information System (INIS)
Vega-Rodriguez, M. A.; Vega-Perez, D.; Gomez-Pulido, J. A.; Sanchez-Perez, J. M.
2007-01-01
An important problem in mobile/cellular technology is trying to cover a certain geographical area using the smallest number of radio antennas while seeking the largest cover rate. This is the well-known telecommunication problem identified as Radio Network Design (RND). This optimization problem can be solved by bio-inspired algorithms, among other options. In this work we use the PBIL (Population-Based Incremental Learning) algorithm, which has been little studied in this field but with which we have obtained very good results. PBIL is based on genetic algorithms and competitive learning (typical in neural networks), being a population evolution model based on probabilistic models. Due to the high number of configuration parameters of PBIL, and because we want to test the RND problem with numerous variants, we have used grid computing with BOINC (Berkeley Open Infrastructure for Network Computing). In this way, we have been able to execute thousands of experiments in a few days using around 100 computers at the same time. In this paper we present the most interesting results from our work. (Author)
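PBIL, as described above, evolves a probability vector rather than a population of individuals. A minimal sketch of the core loop, using OneMax as a stand-in fitness function (a real RND fitness would score cover rate against antenna count; the function name and parameter values here are illustrative assumptions, not the paper's configuration):

```python
import random

def pbil_onemax(n_bits=20, pop=30, lr=0.1, iters=200, seed=1):
    """Minimal PBIL: sample a population from a probability vector,
    pick the best sample (competitive learning), and shift the vector
    toward it. Fitness here is OneMax (count of ones)."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # the probabilistic model
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        best = max(samples, key=sum)         # winner of this generation
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return [1 if pi > 0.5 else 0 for pi in p]

solution = pbil_onemax()
```

With OneMax the model converges toward the all-ones string; swapping in an RND cover-rate evaluator (and a penalty on antenna count) is the only change the scheme needs.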
Directory of Open Access Journals (Sweden)
Zhaocai Wang
2015-10-01
Full Text Available The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation.
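For intuition about the search space such an algorithm explores, here is a brute-force baseline for one common UAP variant (every job must be assigned and individuals may take several jobs; this variant choice and the function name are assumptions for illustration, not the paper's formulation):

```python
from itertools import product

def uap_min_cost(cost):
    """Brute-force UAP: cost[i][j] is the cost of individual i doing
    job j (m individuals, n jobs, m < n). Enumerates all m**n maps of
    jobs to individuals -- exponential, so only for tiny instances;
    the combinatorial blow-up is exactly what parallel (e.g. DNA)
    approaches target."""
    m, n = len(cost), len(cost[0])
    best, best_assign = float("inf"), None
    for assign in product(range(m), repeat=n):   # job j -> assign[j]
        c = sum(cost[assign[j]][j] for j in range(n))
        if c < best:
            best, best_assign = c, assign
    return best, best_assign

# 2 individuals, 4 jobs: tiny enough to enumerate completely.
best, assign = uap_min_cost([[4, 1, 3, 2], [2, 0, 5, 3]])
```

In this unconstrained variant each job simply goes to its cheapest individual; constrained variants (e.g. a load limit per individual) prune the same enumeration.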
Hieber, Simone E.; Bikis, Christos; Khimchenko, Anna; Schulz, Georg; Deyhle, Hans; Thalmann, Peter; Chicherova, Natalia; Rack, Alexander; Zdora, Marie-Christine; Zanette, Irene; Schweighauser, Gabriel; Hench, Jürgen; Müller, Bert
2016-10-01
Cell visualization and counting play a crucial role in biological and medical research, including the study of neurodegenerative diseases. The neuronal cell loss is typically determined to measure the extent of the disease. Its characterization is challenging because the cell density and size already differ by more than three orders of magnitude in a healthy cerebellum. Cell visualization is commonly performed by histology and fluorescence microscopy. These techniques are limited in resolving complex microstructures in the third dimension. Phase-contrast tomography has been proven to provide sufficient contrast in the three-dimensional imaging of soft tissue down to the cell level and, therefore, offers the basis for three-dimensional segmentation. Within this context, a human cerebellum sample was embedded in paraffin and measured in local phase-contrast mode at the beamline ID19 (ESRF, Grenoble, France) and the Diamond Manchester Imaging Branchline I13-2 (Diamond Light Source, Didcot, UK). After the application of Frangi-based filtering the data showed sufficient contrast to automatically identify the Purkinje cells and to quantify their density to 177 cells per mm³ within the volume of interest. Moreover, brain layers were segmented in a region of interest based on edge detection. Subsequently performed histological analysis validated the presence of the cells, which required a mapping from the two-dimensional histological slices to the three-dimensional tomogram. The methodology can also be applied to further tissue types and shows potential for computational tissue analysis in health and disease.
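After filtering and thresholding, the counting step reduces to labeling connected components in the segmented data. A toy two-dimensional stand-in using an explicit flood fill (the paper's pipeline works on Frangi-filtered 3-D tomograms; this sketch, including the `count_blobs` name, only illustrates the labeling idea):

```python
def count_blobs(mask):
    """Count connected components (4-connectivity) in a binary 2-D
    mask -- the cell count; dividing by the imaged area (or volume,
    in 3-D) gives the density reported in such studies."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                   # found a new blob
                stack = [(i, j)]
                while stack:                 # flood-fill its pixels
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

n_cells = count_blobs([[1, 1, 0, 0],
                       [0, 0, 0, 1],
                       [1, 0, 1, 1]])
```

The same idea extends to 3-D by adding the two out-of-plane neighbours to the stack.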
Calculation of the D-COM blind problem with computer codes PIN and RELA
International Nuclear Information System (INIS)
Pazdera, F.; Barta, O.; Smid, J.
1985-01-01
The results of the blind and post-experimental calculations of the 'D-COM Blind Problem on Fission Gas Release', performed within the framework of the IAEA coordinated research programme for 'The Development of Computer Models for Fuel Element Behaviour in Water Reactors', are presented. The results are compared with experimental data. A sensitivity study shows a possible explanation of some discrepancies between calculated and experimental results during the bump test performed after base irradiation. The calculations were performed with the computer codes PIN and RELA. Some submodels used in the calculations are also described. (author)
Garrido, Jose
2011-01-01
… offers a solid first step into scientific and technical computing for those just getting started. … Through simple examples that are both easy to conceptualize and straightforward to express mathematically (something that isn't trivial to achieve), Garrido methodically guides readers from problem statement and abstraction through algorithm design and basic programming. His approach offers those beginning in a scientific or technical discipline something unique: a simultaneous introduction to programming and computational thinking that is very relevant to the practical application of computing
DEFF Research Database (Denmark)
Iris, Cagatay; Pacino, Dario; Røpke, Stefan
2015-01-01
Most of the operational problems in container terminals are strongly interconnected. In this paper, we study the integrated Berth Allocation and Quay Crane Assignment Problem in seaport container terminals. We will extend the current state-of-the-art by proposing novel set partitioning models. To improve the performance of the set partitioning formulations, a number of variable reduction techniques are proposed. Furthermore, we analyze the effects of different discretization schemes and the impact of using a time-variant/invariant quay crane allocation policy. Computational experiments show
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the
DEFF Research Database (Denmark)
Wøhlk, Sanne; Laporte, Gilbert
2017-01-01
The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage collection in Denmark. Common heuristics for the CARP involve the optimal matching of the odd-degree nodes of a graph. The algorithms used in the comparison include the CPLEX solution of an exact formulation, the LEDA matching algorithm, a recent implementation of the Blossom algorithm, as well as six
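For small instances, minimum-cost perfect matching can be solved exactly by bitmask dynamic programming, which makes a handy correctness reference when testing solvers like those compared above (a sketch, not one of the paper's implementations; O(2^n · n²), so only viable for tiny graphs):

```python
from functools import lru_cache

def min_weight_perfect_matching(w):
    """Exact min-cost perfect matching on a complete graph with an
    even number of nodes. w[i][j] is the edge weight. Each DP state
    is the bitmask of still-unmatched nodes; we always pair off the
    lowest unmatched node to avoid counting matchings twice."""
    n = len(w)

    @lru_cache(maxsize=None)
    def solve(mask):
        if mask == 0:
            return 0.0
        i = (mask & -mask).bit_length() - 1          # lowest unmatched node
        best = float("inf")
        for j in range(i + 1, n):
            if mask >> j & 1:
                best = min(best, w[i][j] + solve(mask & ~(1 << i) & ~(1 << j)))
        return best

    return solve((1 << n) - 1)

cost = min_weight_perfect_matching([[0, 2, 9, 3],
                                    [2, 0, 4, 8],
                                    [9, 4, 0, 5],
                                    [3, 8, 5, 0]])
```

Blossom-type algorithms solve the same problem in polynomial time; this exponential version is useful only for validating their output on small graphs.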
International Nuclear Information System (INIS)
Maor, Uri; Tel Aviv Univ.
1995-09-01
The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model without such corrections. It is shown that screening saturation is attained at different scales for different channels. We then turn to the new HERA data on hard (PQCD) Pomeron diffractive channels and discuss the relationship between the soft and hard Pomerons and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab
Solving black box computation problems using expert knowledge theory and methods
International Nuclear Information System (INIS)
Booker, Jane M.; McNamara, Laura A.
2004-01-01
The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation
Melting of polydisperse hard disks
Pronk, S.; Frenkel, D.
2004-01-01
The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find
A Finite-Volume computational mechanics framework for multi-physics coupled fluid-stress problems
International Nuclear Information System (INIS)
Bailey, C; Cross, M.; Pericleous, K.
1998-01-01
Where there is a strong interaction between fluid flow, heat transfer and stress induced deformation, it may not be sufficient to solve each problem separately (i.e. fluid vs. stress, using different techniques or even different computer codes). This may be acceptable where the interaction is static, but less so if it is dynamic. It is desirable for this reason to develop software that can accommodate both requirements (i.e. that of fluid flow and that of solid mechanics) in a seamless environment. This is accomplished in the University of Greenwich code PHYSICA, which solves both the fluid flow problem and the stress-strain equations in a unified Finite-Volume environment, using an unstructured computational mesh that can deform dynamically. Example applications are given of the work of the group in the metals casting process (where thermal stresses cause elasto-visco-plastic distortion)
Nakeva von Mentzer, Cecilia; Lyxell, Björn; Sahlén, Birgitta; Wass, Malin; Lindgren, Magnus; Ors, Marianne; Kallioinen, Petter; Uhlén, Inger
2013-12-01
Examine deaf and hard of hearing (DHH) children's phonological processing skills in relation to a reference group of children with normal hearing (NH) at two baselines pre-intervention. Study the effects of computer-assisted phoneme-grapheme correspondence training in the children. Specifically analyze possible effects on DHH children's phonological processing skills. The study included 48 children who participated in a computer-assisted intervention study focusing on phoneme-grapheme correspondence. Children were 5, 6, and 7 years of age. There were 32 DHH children using cochlear implants (CI) or hearing aids (HA), or both in combination, and 16 children with NH. The study had a quasi-experimental design with three test occasions separated in time by four weeks: baselines 1 and 2 pre-intervention, and 3 post-intervention. Children performed tasks measuring lexical access, phonological processing, and letter knowledge. All children were asked to practice ten minutes per day at home, supported by their parents. NH children outperformed DHH children on the majority of tasks. All children improved their accuracy in phoneme-grapheme correspondence and output phonology as a function of the computer-assisted intervention. For the whole group of children, and specifically for children with CI, a lower initial phonological composite score was associated with a larger phonological change between baseline 2 and post-intervention. Finally, 18 DHH children, of whom 11 used CI, showed specific intervention effects on their phonological processing skills, and strong effect sizes for their improved accuracy of phoneme-grapheme correspondence. For some DHH children phonological processing skills are boosted relatively more by phoneme-grapheme correspondence training. This reflects the reciprocal relationship between phonological change and exposure to and manipulation of letters. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
An analog computer method for solving flux distribution problems in multi-region nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1963-04-15
The paper describes a method developed for determining criticality conditions and plotting flux distribution curves in multi-region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two-group treatment, avoids the iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)
An improved computational version of the LTSN method to solve transport problems in a slab
International Nuclear Information System (INIS)
Cardona, Augusto V.; Oliveira, Jose Vanderlei P. de; Vilhena, Marco Tullio de; Segatto, Cynthia F.
2008-01-01
In this work, we present an improved computational version of the LTSN method to solve transport problems in a slab. The key feature relies on the reordering of the set of SN equations. This procedure reduces by a factor of two the task of evaluating the eigenvalues of the matrix associated with the SN approximations. We present numerical simulations and comparisons with those of the classical LTSN approach. (author)
CSIR Research Space (South Africa)
Wilke, DN
2012-07-01
Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
Energy Technology Data Exchange (ETDEWEB)
Calkins, Mathew; Gates, D.E.A.; Gates, S. James Jr. [Center for String and Particle Theory, Department of Physics, University of Maryland,College Park, MD 20742-4111 (United States); Golding, William M. [Sensors and Electron Devices Directorate, US Army Research Laboratory,Adelphi, Maryland 20783 (United States)
2015-04-13
Starting with valise supermultiplets obtained from 0-branes plus field redefinitions, valise adinkra networks, and the “Garden Algebra,” we discuss an architecture for algorithms that (starting from on-shell theories and through a well-defined computation procedure) search for off-shell completions. We show in one dimension how to directly attack the notorious “off-shell auxiliary field” problem of supersymmetry with algorithms in the adinkra network-world formulation.
Solving Problems in Various Domains by Hybrid Models of High Performance Computations
Directory of Open Access Journals (Sweden)
Yurii Rogozhin
2014-03-01
Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieval.
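SAT instances like the one used to demonstrate such models are conventionally stated as DIMACS-style clauses. A sequential brute-force checker for reference (hybrid membrane/quantum models attack the same search space in parallel; this sketch only defines the problem being solved, and the function name is an illustrative assumption):

```python
from itertools import product

def sat_brute_force(clauses, n_vars):
    """Exhaustive SAT check. Each clause is a list of non-zero ints in
    DIMACS style: k means variable k is true, -k means it is false.
    Returns the first satisfying assignment as a tuple of booleans,
    or None if the formula is unsatisfiable. Cost is O(2**n_vars)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
model = sat_brute_force([[1, -2], [2, 3], [-1, -3]], 3)
```

The exponential sweep over assignments is precisely what membrane systems replace with maximally parallel rule application across membranes.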
International Nuclear Information System (INIS)
Gartling, D.K.
1978-04-01
The theoretical background for the finite element computer program, NACHOS, is presented in detail. The NACHOS code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer. A general description of the fluid/thermal boundary value problems treated by the program is described. The finite element method and the associated numerical methods used in the NACHOS code are also presented. Instructions for use of the program are documented in SAND77-1334
SUPPORT OF NEW COMPUTER HARDWARE AT LUCH'S MC and A SYSTEM: PROBLEMS AND A SOLUTION
International Nuclear Information System (INIS)
Fedoseev, Victor; Shanin, Oleg
2009-01-01
The Microsoft Windows NT 4.0 operating system is the only software product certified in Russia for use in MC and A systems. In the paper, a solution allowing the installation of this outdated operating system on new computers is discussed. The solution has been successfully tested and has been in use on Luch's network since March 2008; furthermore, it is being recommended to other Russian enterprises for the same purpose. Typically, the software part of a nuclear material control and accounting (MC and A) system consists of an operating system (OS), a database management system (DBMS), the accounting program itself and the database of nuclear materials. Russian regulations require that the operating system and database for MC and A be certified for information security, and the whole system must pass an accreditation. Historically, the only certified operating system for MC and A continues to be Microsoft Windows NT 4.0 Server/Workstation; attempts to certify newer versions of Windows have failed. Luch, like most other Russian sites, uses Microsoft Windows NT 4.0 and SQL Server 6.5. Luch's specialists have developed an application (LuchMAS) for accounting purposes. Starting from about 2004, problems appeared in Luch's accounting system related to the difficulty of installing Windows NT 4.0 on new computers. At first, it was possible to solve the problem by choosing computer equipment compatible with Windows NT 4.0 or by selecting certain operating system settings. Over time the problem worsened, and now it is almost impossible to install Windows NT 4.0 on new computers; the reason is the lack of hardware drivers in the outdated operating system. The problem was serious enough that it could have affected the long-term sustainability of Luch's MC and A system if adequate alternative measures were not developed.
Verification of thermal-hydraulic computer codes against standard problems for WWER reflooding
International Nuclear Information System (INIS)
Alexander D Efanov; Vladimir N Vinogradov; Victor V Sergeev; Oleg A Sudnitsyn
2005-01-01
Full text of publication follows: The computational assessment of reactor core component behavior under accident conditions is impossible without knowledge of the thermal-hydraulic processes involved. The adequacy of results obtained with computer codes to the real processes is verified by carrying out a number of standard problems. In 2000-2003, three Russian standard problems on WWER core reflooding were carried out using experiments on the cooldown of a full-height, electrically heated WWER 37-rod bundle model in regimes of bottom (SP-1), top (SP-2) and combined (SP-3) reflooding. Representatives from eight MINATOM organizations took part in this work, in the course of which 'blind' and post-test calculations were performed using various versions of the RELAP5, ATHLET, CATHARE, COBRA-TF, TRAP and KORSAR computer codes. The paper presents a brief description of the test facility, test section, test scenarios and conditions, as well as the basic results of the computational analysis of the experiments. The analysis of the test data revealed a significantly non-one-dimensional nature of the cooldown and rewetting of heater rods heated to a high temperature in a model bundle. This was most pronounced for top and combined reflooding. The verification of the reflooding computer codes showed that most of them fairly predict the peak rod temperature and the time of bundle cooldown; the exceptions are the results of calculations with the ATHLET and CATHARE codes. The nature and rate of rewetting front advance in the lower half of the bundle are fairly predicted by practically all the computer codes. The disagreement between the calculations and experimental results for the upper half of the bundle is caused by the difficulty of simulating multidimensional effects with 1-D computer codes. In this regard, the quasi-two-dimensional computer code COBRA-TF offers certain advantages. Overall, the closest
Modeling biological problems in computer science: a case study in genome assembly.
Medvedev, Paul
2018-01-30
As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
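Although the tutorial's own modeling steps are not reproduced here, one common endpoint of such modeling is worth sketching: assembly is frequently cast as finding an Eulerian path in a de Bruijn graph built from the k-mers of the reads. A minimal illustration (toy reads, not from the tutorial):

```python
# Illustrative sketch (not from the tutorial): genome assembly is commonly
# modeled as finding an Eulerian path in a de Bruijn graph whose nodes are
# (k-1)-mers and whose edges are the k-mers observed in the reads.
from collections import defaultdict

def de_bruijn(reads, k):
    """Map each (k-1)-mer prefix to the list of (k-1)-mer suffixes."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Toy reads; an Eulerian path through the resulting graph spells out a
# candidate reconstruction of the underlying sequence.
g = de_bruijn(["ACGT", "CGTA", "GTAC"], 3)
print(dict(g))
```

Real assemblers add error correction, coverage handling and repeat resolution on top of this core model, which is where the tutorial's "intermediate failed attempts" become instructive.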
International Nuclear Information System (INIS)
Molchanov, I.N.; Khimich, A.N.
1984-01-01
This article shows how a reflection method can be used to find the eigenvalues of a matrix by transforming the matrix to tridiagonal form. The method of conjugate gradients is used to find the smallest eigenvalue and the corresponding eigenvector of symmetric positive-definite band matrices. Topics considered include the computational scheme of the reflection method, the organization of parallel calculations by the reflection method, the computational scheme of the conjugate gradient method, the organization of parallel calculations by the conjugate gradient method, and the effectiveness of parallel algorithms. It is concluded that the overall effectiveness of multiprocessor electronic computers can be increased either by letting the newly available processors work on a new problem in multiprocessor mode, or by improving the coefficient of uniform partitioning of the original information.
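The "reflection method" for tridiagonalization is presumably Householder's: each step applies a reflection H = I - 2vv^T from both sides, preserving the eigenvalues. A serial NumPy sketch of that reduction (the article's parallel organization is not reproduced):

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder
    reflections; eigenvalues are preserved because each step is a
    similarity transform H A H with H = I - 2 v v^T."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:
            continue
        v = x.copy()
        v[0] += norm_x if x[0] >= 0 else -norm_x   # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        # Apply the reflection from the left and then from the right.
        A[k + 1:, :] -= 2.0 * np.outer(v, v @ A[k + 1:, :])
        A[:, k + 1:] -= 2.0 * np.outer(A[:, k + 1:] @ v, v)
    return A

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
S = M + M.T                        # symmetric test matrix
T = householder_tridiagonalize(S)
print(np.round(T, 6))              # entries beyond the tridiagonal band ~ 0
```

The parallel potential lies in the two rank-one updates, which touch independent rows and columns.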
Problems of mineral tax computation in the oil and gas sector
Directory of Open Access Journals (Sweden)
Н. Г. Привалов
2017-04-01
Full Text Available The paper demonstrates the role of the mineral tax in the overall sum of tax revenues in the budget. Problems of tax computation and payment are reviewed; taxpayers and the taxation basis of the amount of extracted minerals are clearly defined. Issues of the rental content of natural resource taxes are reviewed, as well as problems of correctly defining the rental component in the calculation of the mineral tax for liquid and gaseous hydrocarbons. One important problem in mineral tax calculation is a conflict between two laws: the Subsoil Law and the Tax Code of the Russian Federation (Chapter 26). There is an ambiguity in the mechanism of calculating amounts of extracted mineral resources from the positions of the Tax Code and the Subsoil Law. The second problem is the need to amend the mineral tax for oil extraction the same way as has been done for gas extraction, where the characteristics of each field are taken into account. This would provide a basis for correct computation of the natural resource rent for liquid and gaseous hydrocarbons. The paper offers recommendations for Russian authorities on this issue.
Foley, Greg
2014-01-01
A problem that illustrates two ways of computing the break-even radius of insulation is outlined. The problem is suitable for students who are taking an introductory module in heat transfer or transport phenomena and who have some previous knowledge of the numerical solution of nonlinear algebraic equations. The potential for computer algebra,…
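The break-even radius is the insulation radius at which an insulated pipe loses heat at the same rate as the bare pipe, which leads to a nonlinear algebraic equation of the kind the article targets. A sketch of one possible numerical approach (bisection, with assumed property values; not necessarily either of the article's two ways):

```python
import math

# Assumed illustrative values (not from the article):
k = 0.1    # insulation conductivity, W/(m K)
h = 5.0    # outer film coefficient, W/(m^2 K)
r1 = 0.01  # bare pipe outer radius, m

def f(r):
    """Zero when an insulated pipe of outer radius r loses heat at the same
    rate as the bare pipe: ln(r/r1)/k + 1/(h r) - 1/(h r1) = 0."""
    return math.log(r / r1) / k + 1.0 / (h * r) - 1.0 / (h * r1)

# The critical radius k/h maximizes heat loss; the nontrivial break-even
# root lies beyond it, so bracket from just above k/h.
lo, hi = k / h + 1e-6, 0.5
for _ in range(100):               # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
r_be = 0.5 * (lo + hi)
print(f"break-even radius ~ {100 * r_be:.2f} cm")   # roughly 5 cm here
```

Newton's method on the same residual would be the natural second approach for students who already know root-finding.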
Regularization and computational methods for precise solution of perturbed orbit transfer problems
Woollands, Robyn Michele
The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these
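The modified Chebyshev-Picard iteration itself is involved, but the fixed-point idea underneath it is simple: repeatedly substitute the current trajectory estimate into the integral form of the ODE. A minimal plain Picard iteration on a grid (trapezoidal quadrature instead of Chebyshev polynomials; illustrative only, not the dissertation's algorithm):

```python
import numpy as np

def picard(f, y0, t, iterations=30):
    """Plain Picard iteration on a grid:
    y_{n+1}(t) = y0 + integral_0^t f(s, y_n(s)) ds,
    with the integral computed by the cumulative trapezoidal rule."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(iterations):
        g = f(t, y)
        integral = np.concatenate((
            [0.0],
            np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t)),
        ))
        y = y0 + integral
    return y

t = np.linspace(0.0, 1.0, 201)
y = picard(lambda t, y: y, 1.0, t)    # y' = y, y(0) = 1  =>  y = exp(t)
print(abs(y[-1] - np.e))              # small trapezoidal-rule error
```

The Chebyshev variant replaces the grid and quadrature with a Chebyshev polynomial basis, which is what gives the method its large convergence domain and avoids any Newton-type shooting.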
Energy Technology Data Exchange (ETDEWEB)
Spoerl, Andreas
2008-06-05
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation; the resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study investigated whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
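For concreteness, the quantum Fourier transform mentioned above is, as a matrix, just the unitary discrete Fourier transform on the 2^n-dimensional state space. A small NumPy construction (illustrative, unrelated to the thesis's parallel C++ code):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense unitary of the n-qubit quantum Fourier transform:
    F[j, k] = omega^(j k) / sqrt(N), omega = exp(2 pi i / N), N = 2^n."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# Unitarity check: F F^dagger = identity (up to rounding).
print(np.allclose(F @ F.conj().T, np.eye(8)))
```

Time-optimal control asks how quickly a given system's drift and control Hamiltonians can synthesize such a unitary, which is the quantity studied in the thesis.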
Directory of Open Access Journals (Sweden)
Emmanuel Okewu
2017-10-01
Full Text Available The role of automation in sustainable development is not in doubt. Computerization in particular has permeated every facet of human endeavour, enhancing the provision of information for decision-making that reduces cost of operation, promotes productivity and socioeconomic prosperity and cohesion. Hence, a new field called information and communication technology for development (ICT4D has emerged. Nonetheless, the need to ensure environmentally friendly computing has led to this research study with particular focus on green computing in Africa. This is against the backdrop that the continent is feared to suffer most from the vulnerability to climate change and the impact of environmental risk. Using Nigeria as a test case, this paper gauges the green computing awareness level of Africans via sample survey. It also attempts to institutionalize a green computing maturity model with a view to optimizing the level of citizens' awareness amid inherent uncertainties like low bandwidth, poor network and erratic power in an emerging African market. Consequently, we classified the problem as a stochastic optimization problem and applied a metaheuristic search algorithm to determine the best sensitization strategy. Although there are alternative ways of promoting green computing education, the metaheuristic search we conducted indicated that an online real-time solution that not only drives but preserves timely conversations on electronic waste (e-waste management and energy saving techniques among the citizenry is cutting edge. The authors therefore reviewed literature, gathered requirements, modelled the proposed solution using the Unified Modelling Language (UML and developed a prototype. The proposed solution is a web-based multi-tier e-Green computing system that educates computer users on innovative techniques of managing computers and accessories in an environmentally friendly way. We found out that such a real-time web-based interactive forum does not
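The abstract does not name its metaheuristic, but the family of methods it refers to can be illustrated with a generic simulated-annealing loop over a discrete strategy space with a noisy objective (all numbers below are hypothetical, chosen only for the sketch):

```python
import math
import random

# Generic simulated-annealing sketch; the article does not name its
# metaheuristic, and the objective below is entirely hypothetical: strategy
# "62" is arbitrarily best, with Gaussian noise standing in for survey
# uncertainty in measuring awareness gains.
random.seed(1)

def awareness_gain(s):                     # hypothetical noisy objective
    return -(s - 62) ** 2 + random.gauss(0.0, 0.5)

current, best = 0, 0
temperature = 50.0
for _ in range(5000):
    candidate = min(99, max(0, current + random.choice((-3, -2, -1, 1, 2, 3))))
    delta = awareness_gain(candidate) - awareness_gain(current)
    if delta > 0 or random.random() < math.exp(delta / temperature):
        current = candidate                # accept uphill, sometimes downhill
    if awareness_gain(current) > awareness_gain(best):
        best = current
    temperature *= 0.999                   # geometric cooling
print(best)                                # lands near the assumed optimum 62
```

The point of such a search is exactly the one the abstract makes: under noisy evaluations, a stochastic search can still single out the most effective sensitization strategy.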
Directory of Open Access Journals (Sweden)
Kyncl Martin
2017-01-01
Full Text Available We work with the system of partial differential equations describing non-stationary compressible turbulent fluid flow. It is a characteristic feature of hyperbolic equations that discontinuities can arise in solutions even when the initial conditions are smooth. The fundamental problem in this area is the solution of the so-called Riemann problem for the split Euler equations. It is the elementary problem of the one-dimensional conservation laws with given initial conditions (LIC on the left-hand side and RIC on the right-hand side). The solution of this problem is required in many numerical methods dealing with 2D/3D fluid flow. The exact (entropy weak) solution of this hyperbolic problem cannot be expressed in closed form and has to be computed by an iterative process (to a given accuracy), therefore various approximations of this solution are used. The Riemann problem has to be further modified in the close vicinity of the boundary, where the LIC is given while the RIC is not known. Usually, this boundary problem is linearized or roughly approximated. The inaccuracies implied by these simplifications may be small, but they have a huge impact on the solution in the whole studied area, especially for non-stationary flow. Using a thorough analysis of the Riemann problem we show that the RIC for the local problem can be partially replaced by suitable complementary conditions. We suggest such complementary conditions according to the desired preference. This way it is possible to construct boundary conditions by the preference of total values, pressure, velocity, mass flow, or temperature. Further, using suitable complementary conditions, it is possible to simulate the flow in the vicinity of a diffusible barrier. In contrast to the initial-value Riemann problem, the solution of such modified problems can be written in the closed form for some
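The iterative process mentioned above typically means solving a scalar nonlinear equation for the star-region pressure between the two initial states. A compact sketch for an ideal gas, following Toro's standard pressure-function formulation (bisection is used here for robustness; Newton iteration is the usual production choice):

```python
import math

gamma = 1.4   # ideal-gas ratio of specific heats

def f_side(p, rho, pk):
    """Toro's pressure function for one side of the discontinuity:
    shock branch for p > pk, rarefaction branch otherwise."""
    a = math.sqrt(gamma * pk / rho)          # sound speed on this side
    if p > pk:                               # shock
        A = 2.0 / ((gamma + 1.0) * rho)
        B = (gamma - 1.0) / (gamma + 1.0) * pk
        return (p - pk) * math.sqrt(A / (p + B))
    # rarefaction
    return 2.0 * a / (gamma - 1.0) * ((p / pk) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)

def star_pressure(rhoL, uL, pL, rhoR, uR, pR):
    """Star-region pressure of the exact Riemann problem, computed
    iteratively (here by bisection) to a given accuracy."""
    g = lambda p: f_side(p, rhoL, pL) + f_side(p, rhoR, pR) + (uR - uL)
    lo, hi = 1e-8, 10.0 * max(pL, pR)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Sod shock-tube data; the classic reference value is p* ~ 0.30313.
p_star = star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
print(round(p_star, 5))
```

The boundary modification the paper studies replaces one of the two states in exactly this kind of local problem with complementary conditions, which is what makes closed-form solutions possible in some cases.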
American Society for Testing and Materials. Philadelphia
2007-01-01
1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...
International Nuclear Information System (INIS)
Dan, J.P.; Boving, H.J.; Hintermann, H.E.
1993-01-01
Hard, wear resistant and low friction coatings are presently produced on a world-wide basis, by different processes such as electrochemical or electroless methods, spray technologies, thermochemical, CVD and PVD. Some of the most advanced processes, especially those dedicated to thin film depositions, basically belong to CVD or PVD technologies, and will be looked at in more detail. The hard coatings mainly consist of oxides, nitrides, carbides, borides or carbon. Over the years, many processes have been developed which are variations and/or combinations of the basic CVD and PVD methods. The main difference between these two families of deposition techniques is that the CVD is an elevated temperature process (≥ 700 C), while the PVD on the contrary, is rather a low temperature process (≤ 500 C); this of course influences the choice of substrates and properties of the coating/substrate systems. Fundamental aspects of the vapor phase deposition techniques and some of their influences on coating properties will be discussed, as well as the very important interactions between deposit and substrate: diffusions, internal stress, etc. Advantages and limitations of CVD and PVD respectively will briefly be reviewed and examples of applications of the layers will be given. Parallel to the development and permanent updating of surface modification technologies, an effort was made to create novel characterisation methods. A close look will be given to the coating adherence control by means of the scratch test, at the coating hardness measurement by means of nanoindentation, at the coating wear resistance by means of a pin-on-disc tribometer, and at the surface quality evaluation by Atomic Force Microscopy (AFM). Finally, main important trends will be highlighted. (orig.)
Directory of Open Access Journals (Sweden)
Fushing Hsieh
2016-11-01
Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimizing tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that aids the optimization task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveal the robustness of the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand-new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.
Directory of Open Access Journals (Sweden)
Savita Mallikarjun
2016-01-01
Full Text Available Aims: To assess and compare the thickness of gingiva in the anterior maxilla using radiovisiography (RVG and cone beam computed tomography (CBCT and its correlation with the thickness of underlying alveolar bone. Settings and Design: This cross-sectional study included 10 male subjects in the age group of 20–45 years. Materials and Methods: After analyzing the width of keratinized gingiva of the maxillary right central incisor, the radiographic assessment was done using a modified technique for RVG and CBCT, to measure the thickness of both the labial gingiva and labial plate of alveolar bone at 4 predetermined locations along the length of the root in each case. Statistical Analysis Used: Statistical analysis was performed using Student's t-test and Pearson's correlation test, with the help of statistical software (SPSS V13. Results: No statistically significant differences were obtained in the measurement made using RVG and CBCT. The results of the present study also failed to reveal any significant correlation between the width of gingiva and the alveolar bone in the maxillary anterior region. Conclusions: Within the limitations of this study, it can be concluded that both CBCT and RVG can be used as valuable tools in the assessment of the soft and hard tissue dimensions.
Improved iterative image reconstruction algorithm for the exterior problem of computed tomography
International Nuclear Information System (INIS)
Guo, Yumeng; Zeng, Li
2017-01-01
In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still challenging due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.
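The role of total variation minimization in such reconstructions can be illustrated on a toy 1-D inverse problem: a smoothed TV penalty favors piecewise-constant solutions even when the measurements alone underdetermine the signal (this is only the generic TV idea, not the paper's TVM + RSF algorithm):

```python
import numpy as np

# Toy 1-D analogue (not the paper's TVM + RSF algorithm): recover a
# piecewise-constant signal from a few random projections by gradient
# descent on  ||A x - b||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps),
# i.e. a smoothed total-variation penalty.
rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                               # one "defect"-like plateau
A = rng.standard_normal((32, n)) / np.sqrt(32.0)  # underdetermined system
b = A @ x_true

lam, eps, step = 0.01, 1e-4, 0.05
x = np.zeros(n)
for _ in range(4000):
    grad_data = 2.0 * A.T @ (A @ x - b)
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)     # derivative of the smoothed TV terms
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= s
    grad_tv[1:] += s
    x -= step * (grad_data + lam * grad_tv)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(rel_err, 3))             # well below 1: the plateau is recovered
```

In the paper's setting the operator is a truncated CT projection rather than a random matrix, and the RSF model supplies the additional region information that plain TV lacks.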
Energy Technology Data Exchange (ETDEWEB)
Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-08-01
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next, along with a Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration. A multi-fluid formulation is under development; although not yet complete, BIGHORN has been designed to handle multi-fluid problems, and due to the flexibility of the underlying MOOSE framework it is quite extensible, accommodating both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
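The cell-centered finite-volume machinery referred to above can be illustrated on the simplest possible case: scalar advection with a Rusanov (local Lax-Friedrichs) interface flux and a conservative update. This is a generic first-order sketch, not BIGHORN code:

```python
import numpy as np

# Generic first-order finite-volume sketch (illustrative only, not BIGHORN
# code): scalar advection u_t + a u_x = 0 on a periodic grid with a Rusanov
# (local Lax-Friedrichs) interface flux.
a = 1.0                          # advection speed
n, L, T = 200, 1.0, 0.5
dx = L / n
dt = 0.4 * dx / abs(a)           # CFL-limited time step
x = (np.arange(n) + 0.5) * dx    # cell centers
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

t = 0.0
while t < T:
    up = np.roll(u, -1)                                  # right neighbor
    flux = 0.5 * a * (u + up) - 0.5 * abs(a) * (up - u)  # interface flux
    u = u - dt / dx * (flux - np.roll(flux, 1))          # conservative update
    t += dt

# The pulse advects a distance ~ a*T = 0.5, smeared by numerical diffusion,
# while the cell-integrated "mass" is conserved exactly.
print(u.sum() * dx)
```

Second-order schemes such as BIGHORN's replace the piecewise-constant cell states with limited reconstructions, and systems like the Euler equations replace the scalar flux with a vector flux, but the update pattern is the same.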
Improved iterative image reconstruction algorithm for the exterior problem of computed tomography
Energy Technology Data Exchange (ETDEWEB)
Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)
2017-01-11
Recent advances in computational-analytical integral transforms for convection-diffusion problems
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.
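The eigenfunction-expansion core of the GITT can be summarized by the classical transform-inverse pair (a generic sketch in the usual GITT notation, with $w$ a weight function and $\tilde{\psi}_i$ the normalized eigenfunctions of an auxiliary eigenvalue problem):

```latex
% Integral transform pair for a potential T(x,t):
\bar{T}_i(t) = \int_V w(\mathbf{x})\, \tilde{\psi}_i(\mathbf{x})\, T(\mathbf{x},t)\, dV
  \qquad \text{(transform)}
\qquad
T(\mathbf{x},t) = \sum_{i=1}^{\infty} \tilde{\psi}_i(\mathbf{x})\, \bar{T}_i(t)
  \qquad \text{(inverse)}
```

Transforming the PDE and truncating the expansion at finite order yields a coupled ODE system for the $\bar{T}_i(t)$; the enhancements surveyed above all aim at making this truncated expansion converge faster.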
Parallel Object-Oriented Computation Applied to a Finite Element Problem
Directory of Open Access Journals (Sweden)
Jon B. Weissman
1993-01-01
Full Text Available The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application and Mentat, discuss the implementation, and present performance results for both the Mentat version and a hand-coded parallel Fortran version.
A shared computer-based problem-oriented patient record for the primary care team.
Linnarsson, R; Nordgren, K
1995-01-01
1. INTRODUCTION. A computer-based patient record (CPR) system, Swedestar, has been developed for use in primary health care. The principal aim of the system is to support continuous quality improvement through improved information handling, improved decision-making, and improved procedures for quality assurance. The Swedestar system has evolved during a ten-year period beginning in 1984. 2. SYSTEM DESIGN. The design philosophy is based on the following key factors: a shared, problem-oriented patient record; structured data entry based on an extensive controlled vocabulary; advanced search and query functions, where the query language has the most important role; integrated decision support for drug prescribing and care protocols and guidelines; integrated procedures for quality assurance. 3. A SHARED PROBLEM-ORIENTED PATIENT RECORD. The core of the CPR system is the problem-oriented patient record. All problems of one patient, recorded by different members of the care team, are displayed on the problem list. Starting from this list, a problem follow-up can be made, one problem at a time or for several problems simultaneously. Thus, it is possible to get an integrated view, across provider categories, of those problems of one patient that belong together. This shared problem-oriented patient record provides an important basis for the primary care team work. 4. INTEGRATED DECISION SUPPORT. The decision support of the system includes a drug prescribing module and a care protocol module. The drug prescribing module is integrated with the patient records and includes an on-line check of the patient's medication list for potential interactions and data-driven reminders concerning major drug problems. Care protocols have been developed for the most common chronic diseases, such as asthma, diabetes, and hypertension. The patient records can be automatically checked according to the care protocols. 5. PRACTICAL EXPERIENCE. The Swedestar system has been implemented in a
Reduction of community alcohol problems: computer simulation experiments in three counties.
Holder, H D; Blose, J O
1987-03-01
A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise an additional 6% and drinking-related work problems 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.
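The system dynamics approach described above can be sketched with a minimal stock-flow loop: a price lever shifts consumption through an elasticity, and problem levels track consumption with a lag. All parameters below are hypothetical placeholders, not the authors' calibrated county models:

```python
# Generic system-dynamics sketch with hypothetical parameters (not the
# authors' calibrated county models): consumption responds to price through
# an assumed constant elasticity, and alcohol-related problems track
# consumption as a lagging first-order stock.
years = 10
elasticity = -0.5        # assumed price elasticity of demand
consumption = 10.0       # per-capita consumption index (assumed)
problems = 100.0         # alcohol-related problems index (assumed)
price_growth = 0.03      # real price increase from indexing to the CPI
adjust_rate = 0.3        # how quickly problems track consumption

history = []
for year in range(1, years + 1):
    consumption *= (1.0 + price_growth) ** elasticity   # demand response
    target = 10.0 * consumption                         # problems ~ consumption
    problems += adjust_rate * (target - problems)       # stock adjustment
    history.append((year, round(consumption, 2), round(problems, 1)))

print(history[-1])   # consumption and problems both drift downward
```

The study's actual model links many more stimulus factors (income, advertising, drinking age, norms), but the mechanism is the same: interventions shift inflows, and problem stocks respond over the ten-year horizon.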
International Nuclear Information System (INIS)
Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P
2016-01-01
Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)
On the Computation of the Efficient Frontier of the Portfolio Selection Problem
Directory of Open Access Journals (Sweden)
Clara Calvo
2012-01-01
An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plot of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to help the investor select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
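The plain ε-constraint idea that the paper improves upon can be sketched on a discretized two-asset toy problem: maximize return subject to a variance bound ε, then scan ε to trace the frontier. The returns and covariances below are made up, and the paper's actual method additionally handles cardinality and semicontinuous-variable constraints and describes the frontier analytically.

```python
# Plain epsilon-constraint sketch on a two-asset toy problem (illustrative
# numbers only). Portfolios are w = (x, 1-x) over a grid of x values.

def efficient_frontier(mu, cov, steps=50):
    """Maximize return subject to variance <= eps, scanning eps over a grid."""
    def ret(x):
        return x * mu[0] + (1 - x) * mu[1]
    def var(x):
        return (x * x * cov[0][0] + (1 - x) ** 2 * cov[1][1]
                + 2 * x * (1 - x) * cov[0][1])
    grid = [i / steps for i in range(steps + 1)]
    vmin, vmax = min(var(x) for x in grid), max(var(x) for x in grid)
    frontier = []
    for k in range(steps + 1):
        eps = vmin + (vmax - vmin) * k / steps        # variance budget
        feasible = [x for x in grid if var(x) <= eps + 1e-12]
        best = max(feasible, key=ret)                 # best return within budget
        frontier.append((var(best), ret(best)))
    return frontier

points = efficient_frontier(mu=[0.10, 0.05], cov=[[0.04, 0.01], [0.01, 0.02]])
```

As ε grows the feasible set only expands, so the attainable return along the frontier is nondecreasing; the paper's contribution is to recover this curve analytically (as arcs of parabola) rather than point by point.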
Computer-assisted mammography in clinical practice: Another set of problems to solve
International Nuclear Information System (INIS)
Gale, A.G.; Roebuck, E.J.; Worthington, B.S.
1986-01-01
To be adopted in radiological practice, computer-assisted diagnosis must address a domain of realistic complexity and have a high performance in terms of speed and reliability. Use of a microcomputer-based system of mammographic diagnoses employing discriminant function analysis resulted in significantly fewer false-positive diagnoses while producing a similar level of correct diagnoses of cancer as normal reporting. Although such a system is a valuable teaching aid, its clinical use is constrained by the problems of unambiguously codifying descriptors, data entry time, and the tendency of radiologists to override predicted diagnoses which conflict with their own
An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control
Directory of Open Access Journals (Sweden)
Aksenov Alexey Y.
2014-09-01
The paper considers an approach that applies computer vision systems to the problem of unmanned aerial vehicle control. Processing of images obtained through an onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and provides the ability to hover over a given point and to perform precise take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.
A New Optimization Model for Computer-Aided Molecular Design Problems
DEFF Research Database (Denmark)
Zhang, Lei; Cignitti, Stefano; Gani, Rafiqul
Computer-Aided Molecular Design (CAMD) is a method to design molecules with desired properties. That is, through CAMD, it is possible to generate molecules that match a specified set of target properties. CAMD has attracted much attention in recent years due to its ability to design novel as well...... with structure information considered due to the increased size of the mathematical problem and number of alternatives. Thus, decomposition-based approach is proposed to solve the problem. In this approach, only first-order groups are considered in the first step to obtain the building block of the designed...... molecule, then the property model is refined with second-order groups based on the results of the first step. However, this may result in the possibility of an optimal solution being excluded. Samudra and Sahinidis [4] used property relaxation method in the first step to avoid this situation...
Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique
Directory of Open Access Journals (Sweden)
Nur Azzammudin Rahmat
2016-06-01
Economic dispatch is a crucial process conducted by utilities to determine the right amount of power to be generated and distributed to consumers. During the process, the utilities also consider pollutant emission as a consequence of fossil-fuel consumption. Fossil fuels include petroleum, coal, and natural gas; each has its unique chemical composition of pollutants, i.e. sulphur oxides (SOx), nitrogen oxides (NOx) and carbon oxides (COx). This paper presents a multi-pollutant emission dispatch problem solved using a computational intelligence technique. In this study, a novel emission dispatch technique is formulated to determine the pollutant level. It utilizes a pre-developed optimization technique termed differential evolution immunized ant colony optimization (DEIANT) for the emission dispatch problem. The optimization results indicated a high COx level, regardless of the type of fossil fuel consumed.
CASKETSS-HEAT: a finite difference computer program for nonlinear heat conduction problems
International Nuclear Information System (INIS)
Ikushima, Takeshi
1988-12-01
A heat conduction program, CASKETSS-HEAT, has been developed. CASKETSS-HEAT is a finite difference computer program for the solution of multi-dimensional nonlinear heat conduction problems. Its main features are as follows. (1) One-, two- and three-dimensional geometries for heat conduction calculation are available. (2) Convection and radiation boundary heat transfer can be specified. (3) Phase change and chemical change can be treated. (4) Finned surface heat transfer can be treated easily. (5) Data memory allocation in the program is variable according to problem size. (6) The program is compatible with the stress analysis programs SAP4 and SAP5. (7) Pre- and post-processing for input data generation and graphic representation of calculation results are available. In the paper, a brief illustration of the calculation method, input data and a sample calculation are presented. (author)
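As a rough illustration of the finite difference approach such a code is built on, here is a minimal 1-D explicit heat-conduction step in Python. CASKETSS-HEAT itself handles 1-3 D nonlinear problems with convection/radiation boundaries and phase change; the grid and material values below are arbitrary.

```python
# Minimal 1-D explicit finite-difference sketch of dT/dt = alpha * d2T/dx2
# with fixed-temperature boundaries (a linear toy, not CASKETSS-HEAT itself).

def step_heat(T, alpha, dx, dt):
    """One explicit Euler step on the interior nodes; ends are held fixed."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    return ([T[0]] +
            [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
             for i in range(1, len(T) - 1)] +
            [T[-1]])

# A bar initially at 0 with both ends held at 100 relaxes toward 100 everywhere.
T = [100.0] + [0.0] * 9 + [100.0]
for _ in range(2000):
    T = step_heat(T, alpha=1.0, dx=0.1, dt=0.004)
```

The stability check (r ≤ 0.5) is the standard constraint on explicit schemes; implicit formulations, as used in production codes, remove it at the cost of solving a linear system per step.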
Simplified computational methods for elastic and elastic-plastic fracture problems
Atluri, Satya N.
1992-01-01
An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.
Problem of long-range forces in the computer simulation of condensed media
International Nuclear Information System (INIS)
Ceperely, D.
1980-07-01
Simulation (both Monte Carlo and molecular dynamical) has become a powerful tool in the study of classical systems of particles interacting with short-range pair potentials. For systems involving long-range forces (e.g., Coulombic, dipolar, hydrodynamic) it is a different story. Relating infinite-system properties to the results of computer simulation involving relatively small numbers of particles, periodically replicated, raises difficult and challenging problems. The purpose of the workshop was to bring together a group of scientists, all of whom share a strong direct interest in clearly formulating and resolving these problems. There were 46 participants, most of whom have been actively engaged in simulations of Hamiltonian models of condensed media. A few participants were scientists who are not primarily concerned, themselves, with simulation, but who are deeply involved in the theory of such models
Resource allocation on computational grids using a utility model and the knapsack problem
Van der Ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J
2009-01-01
This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
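The core combinatorial structure can be illustrated with a plain 0-1 knapsack dynamic program in Python. The task utilities and CPU-hour costs below are invented, and the paper's actual formulation is the harder multichoice multidimensional variant.

```python
# 0-1 knapsack sketch: choose tasks for a resource with a CPU-hour budget,
# maximizing total utility (toy numbers, single capacity dimension).

def allocate(tasks, capacity):
    """tasks: list of (utility, cost); returns (best_utility, chosen indices)."""
    best = {0: (0.0, [])}              # cost used -> (total utility, chosen tasks)
    for i, (u, c) in enumerate(tasks):
        for used, (tot, chosen) in list(best.items()):
            if used + c <= capacity:
                cand = (tot + u, chosen + [i])
                if used + c not in best or cand[0] > best[used + c][0]:
                    best[used + c] = cand
    return max(best.values())          # state with the highest utility

utility, chosen = allocate([(60, 10), (100, 20), (120, 30)], capacity=50)
```

In the paper's multichoice variant each task additionally offers several options (resource choices), and capacities are multidimensional, which is what makes a utility model and heuristic policies attractive.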
Identification and simulation of the power quality problems using computer models
International Nuclear Information System (INIS)
Abro, M.R.; Memon, A.P.; Memon, Z.A.
2005-01-01
Power quality has become a main factor in our life. If the quality of power delivered over the electrical power network is polluted, serious problems can arise within the modern social structure and its conveniences. The nonlinear characteristics of various office and industrial equipment connected to the power grid can cause electrical disturbances leading to poor power quality. In many cases the electric power consumed is first converted to a different form, and such conversion processes introduce harmonic pollution into the grid. These electrical disturbances can destroy sensitive equipment connected to the grid or, in some cases, cause it to malfunction. In a huge power network, identifying the source of such disturbances without interrupting the supply is a big problem. This paper studies power quality problems caused by typical loads using computer models, paving the way to identifying the source of the problem. The Power System Blockset (PSB) toolbox of MATLAB is used for this paper; it is designed to provide a modern tool that rapidly and easily builds models and simulates the power system. The blockset uses the Simulink environment, allowing a model to be built using simple click-and-drag procedures. (author)
Computing the Fréchet distance between folded polygons
Cook IV, A.F.; Driemel, A.; Sherette, J.; Wenk, C.
2015-01-01
Computing the Fréchet distance for surfaces is a surprisingly hard problem and the only known polynomial-time algorithm is limited to computing it between flat surfaces. We study the problem of computing the Fréchet distance for a class of non-flat surfaces called folded polygons. We present a
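For polygonal curves, in contrast to the surfaces studied here, the discrete Fréchet distance is computable in quadratic time by the classic Eiter-Mannila dynamic program. A Python sketch on made-up curves helps show why the curve case is easy while the surface case is surprisingly hard:

```python
from functools import lru_cache
from math import dist

# Discrete Fréchet distance between two polygonal curves (Eiter-Mannila
# dynamic program). The coupling distance at (i, j) is the best achievable
# maximum leash length when the walkers stand at P[i] and Q[j].

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Two parallel horizontal segments at vertical distance 1.
d = discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)])
```

For surfaces the analogous "leash" becomes a continuous reparametrization of a 2-D domain, and no such simple recurrence is known, which is the gap this paper addresses for folded polygons.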
Fundamental challenging problems for developing new nuclear safety standard computer codes
International Nuclear Information System (INIS)
Wong, P.K.; Wong, A.E.; Wong, A.
2005-01-01
Three fundamental challenging problems for developing new nuclear safety standard computer codes were presented at the US-NRC RIC2003, Session W4, 2:15-3:15 PM, on April 16, 2003, in the Presidential Ballroom of the Capital Hilton Hotel, Washington D.C., before more than 800 nuclear professionals from many countries. The problems are based on the claims of US patents 5,084,232, 5,848,377 and 6,430,516 (retrievable by entering the patent numbers at http://164.195.100.11/netahtml/srchnum.htm) and on associated technical papers presented and published at international conferences in the preceding three years; all of this material was sent to the US-NRC by e-mail on March 26, 2003, at 2:46 PM. The objective and scope of this paper is to invite all nuclear professionals to examine and evaluate the computer codes currently used in their own countries by comparing numerical data from these three openly posed fundamental problems, in order to establish a global safety standard for all nuclear power plants in the world. (authors)
Aryal, Bijaya
2016-03-01
We have studied the impacts of web-based Computer Coaches on educational outputs and outcomes. This presentation will describe the technical and conceptual framework of the Coaches and discuss undergraduate students' views of them. Moreover, the impacts on students' physics problem-solving performance and on their conceptual understanding of physics will be reported. We used a qualitative research technique to collect and analyze interview data from 19 undergraduate students who used the Coaches in the interview setting. The empirical results show that the favorability and efficacy of the Computer Coaches differ considerably across students of different educational backgrounds, preparation levels, attitudes and epistemologies about physics learning. The interview data show that female students tend to view the use of the Coach more favorably. Likewise, our assessment suggests that female students seem to benefit more from the Coaches in their problem-solving performance and in conceptual learning of physics. Finally, the analysis finds evidence that the Coach has potential for increasing efficiency in usage and for improving students' educational outputs and outcomes under its customized usage. This work was partially supported by the Center for Educational Innovation, Office of the Senior Vice President for Academic Affairs and Provost, University of Minnesota.
Kuncoro, K. S.; Junaedi, I.; Dwijanto
2018-03-01
This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was mixed methods with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with a computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject was unable to meet almost any of the problem-solving indicators: this subject could not precisely construct the initial completion table, so the completion phase following Polya's steps was constrained.
MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide
Energy Technology Data Exchange (ETDEWEB)
Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H. [and others]
1996-09-01
This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.
Scilab software as an alternative low-cost computing in solving the linear equations problem
Agus, Fahrul; Haviluddin
2017-02-01
Numerical computation packages are widely used both in teaching and research. These packages include licensed (proprietary) and open-source (non-proprietary) software. One reason to use such a package is the complexity of mathematical functions (e.g., linear problems); moreover, the number of variables in linear and non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce Scilab, an additional numerical computation package, as a low-cost alternative for computational programming. In this paper, Scilab was used for activities related to mathematical models. In the experiment, four numerical methods were implemented: Gaussian elimination, Gauss-Jordan, inverse matrix, and lower-upper (LU) decomposition. The results of this study showed that routines for these numerical methods were created and explored using Scilab procedures; these routines could then serve as teaching material for a course.
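One of the four methods mentioned, Gaussian elimination with partial pivoting, can be sketched in plain Python for readers without Scilab; the test system below is invented.

```python
# Gaussian elimination with partial pivoting, in plain Python (an equivalent
# of the kind of routine the paper builds as Scilab procedures).

def gauss_solve(A, b):
    """Solve A x = b; A is a list of rows, b a list. Inputs are copied."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]       # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]            # partial pivoting
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]               # eliminate below pivot
    x = [0.0] * n
    for i in reversed(range(n)):                       # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # exact solution: [0.8, 1.4]
```

The other three methods in the paper (Gauss-Jordan, inverse matrix, LU) are variations on the same elimination machinery and could be written in the same style.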
Parallel computing in cluster of GPU applied to a problem of nuclear engineering
International Nuclear Information System (INIS)
Moraes, Sergio Ricardo S.; Heimlich, Adino; Resende, Pedro
2013-01-01
Cluster computing has been widely used as a low-cost alternative for parallel processing in scientific applications. With the use of the Message-Passing Interface (MPI) protocol, development became even more accessible and widespread in the scientific community. A more recent trend is the use of the Graphics Processing Unit (GPU), a powerful co-processor able to perform hundreds of instructions in parallel, reaching a processing capacity hundreds of times that of a CPU. However, a standard PC does not, in general, accommodate more than two GPUs. Hence, this work proposes the development and evaluation of a hybrid low-cost parallel approach to the solution of a typical nuclear engineering problem. The idea is to use cluster parallelism technology (MPI) together with GPU programming techniques (CUDA - Compute Unified Device Architecture) to simulate neutron transport through a slab using the Monte Carlo method. Using a cluster comprising four quad-core computers with 2 GPUs each, programs have been developed using MPI and CUDA technologies. Experiments applying different configurations, from 1 to 8 GPUs, have been performed, and results were compared with the sequential (non-parallel) version. A speed-up of about 2,000 times has been observed when comparing the 8-GPU configuration with the sequential version. The results presented here are discussed and analyzed with the objective of outlining gains and possible limitations of the proposed approach. (author)
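The kind of sampling being parallelized can be illustrated with a sequential 1-D Monte Carlo slab-transmission sketch; the cross sections and geometry below are made up, and the actual code distributes such particle histories over MPI ranks and CUDA threads.

```python
import random

# Sequential Monte Carlo sketch of neutron transmission through a slab
# (1-D transport with isotropic scattering, hypothetical cross sections).
# Each history is independent, which is what makes this workload
# embarrassingly parallel across cluster nodes and GPU threads.

def transmission(n_particles, thickness, sigma_t, scatter_prob, seed=1):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x, direction = 0.0, 1.0
        while True:
            x += direction * rng.expovariate(sigma_t)  # sample free flight
            if x >= thickness:
                transmitted += 1                       # leaked out the far face
                break
            if x < 0.0:
                break                                  # escaped back out the entry face
            if rng.random() < scatter_prob:
                direction = rng.choice([-1.0, 1.0])    # isotropic scatter in 1-D
            else:
                break                                  # absorbed
    return transmitted / n_particles

t = transmission(20000, thickness=2.0, sigma_t=1.0, scatter_prob=0.5)
```

A parallel version would simply split `n_particles` across workers with independent random streams and sum the tallies, which is why near-linear speed-ups are achievable.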
Experimental realization of a one-way quantum computer algorithm solving Simon's problem.
Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G
2014-11-14
We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem-a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
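The classical side of the quantum-classical gap mentioned above can be illustrated in Python: for a 2-to-1 function with a hidden XOR period s, a classical solver must hunt for a collision, which on average takes exponentially many queries in n, whereas the quantum algorithm needs only O(n). The 5-bit instance and oracle construction below are a toy.

```python
# Classical collision search for Simon's problem: f satisfies f(x) == f(x ^ s)
# for a hidden nonzero period s. Classically, s is revealed only by finding
# two colliding inputs (toy 5-bit instance, made-up oracle labels).

def make_simon_oracle(n, s):
    """Build a 2-to-1 function with f(x) == f(x ^ s) for all n-bit x."""
    f, next_label = {}, 0
    for x in range(2 ** n):
        if x not in f:
            f[x] = f[x ^ s] = next_label
            next_label += 1
    return f.__getitem__

def find_period_classically(f, n):
    seen = {}
    for x in range(2 ** n):          # worst case: exponentially many queries
        y = f(x)
        if y in seen:
            return x ^ seen[y]       # a colliding pair (x, x') gives s = x ^ x'
        seen[y] = x
    return 0                          # f was injective, i.e. s = 0

n, s = 5, 0b10110
oracle = make_simon_oracle(n, s)
recovered = find_period_classically(oracle, n)
```

The quantum algorithm instead samples bit strings orthogonal to s (over GF(2)) and solves a small linear system, which is what the five-qubit cluster-state experiment realizes for the logical two-qubit version.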
The Sizing and Optimization Language, (SOL): Computer language for design problems
Lucas, Stephen H.; Scotti, Stephen J.
1988-01-01
The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
Testan, Peter R.
1987-04-01
A number of Color Hard Copy (CHC) market drivers currently indicate strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to products, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative power of full-color computer-generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena: thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be used in business graphics presentation applications in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent through 1990. Increased availability of packages that allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as PostScript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher-resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, sales of color monitors are expected to surpass sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer-generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected
Energy Technology Data Exchange (ETDEWEB)
Krylov, V.A.; Pisarenko, V.P.
1982-01-01
Methods of modeling complex power networks with short circuits are described. The methods are implemented in integrated computation programs for short-circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000), on a computer with a limited online memory capacity (M equals 4030 for the computer).
Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin
2015-01-01
We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…
Hickendorff, Marian
2013-01-01
The results of an exploratory study into measurement of elementary mathematics ability are presented. The focus is on the abilities involved in solving standard computation problems on the one hand and problems presented in a realistic context on the other. The objectives were to assess to what extent these abilities are shared or distinct, and…
2014-01-01
Background: Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new, short, non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents, and to study the association between screen time and perceived problems. Methods: Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer and console gaming and on internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. Results: The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, with odds ratios ranging from 6.90 to 10.23. Conclusion: The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys of young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time gaming and on the internet did not experience problems. PMID:24731270
Performance of popular open source databases for HEP related computing problems
International Nuclear Information System (INIS)
Kovalskyi, D; Sfiligoi, I; Wuerthwein, F; Yagil, A
2014-01-01
Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project if a problem can be handled by a single server, or if one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in a single server case to avoid situations when a multi-server solution is adopted mostly due to sub-optimal performance per node. This paper presents comparison benchmarks of popular open source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.
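The flavor of such single-server benchmarking can be sketched with the stdlib sqlite3 module. The paper benchmarks server DBMSs against a Glidein-based job-monitoring workload; the schema and row counts below are invented stand-ins for that workload.

```python
import sqlite3
import time

# Single-server micro-benchmark sketch: time a bulk insert of job-monitoring
# rows into an in-memory SQLite database, then run a status query
# (hypothetical schema; the paper tests server DBMSs, not SQLite).

def bench_inserts(n_rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT, site TEXT)")
    t0 = time.perf_counter()
    with con:  # one transaction: per-row commits would dominate the timing
        con.executemany(
            "INSERT INTO jobs (state, site) VALUES (?, ?)",
            (("running" if i % 3 else "idle", f"site{i % 8}") for i in range(n_rows)),
        )
    elapsed = time.perf_counter() - t0
    (count,) = con.execute("SELECT COUNT(*) FROM jobs WHERE state = 'idle'").fetchone()
    return elapsed, count

elapsed, idle = bench_inserts(10000)
```

Measuring a single-node baseline like this is exactly the point the abstract makes: it tells you whether a multi-server deployment is a genuine scaling need or a workaround for per-node inefficiency.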
Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.
2014-01-01
SUMMARY: We compare the computational efficiency of isogeometric Galerkin and collocation methods for partial differential equations in the asymptotic regime. We define a metric to identify when numerical experiments have reached this regime. We then apply these ideas to analyze the performance of different isogeometric discretizations, which encompass C0 finite element spaces and higher-continuous spaces. We derive convergence and cost estimates in terms of the total number of degrees of freedom and then perform an asymptotic numerical comparison of the efficiency of these methods applied to an elliptic problem. These estimates are derived assuming that the underlying solution is smooth, the full Gauss quadrature is used in each non-zero knot span and the numerical solution of the discrete system is found using a direct multi-frontal solver. We conclude that under the assumptions detailed in this paper, higher-continuous basis functions provide marginal benefits.
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
Improving the computation efficiency of COBRA-TF for LWR safety analysis of large problems
International Nuclear Information System (INIS)
Cuervo, D.; Avramova, M. N.; Ivanov, K. N.
2004-01-01
A matrix solver has been implemented in COBRA-TF in order to improve the computational efficiency of both numerical solution methods existing in the code: Gauss elimination and the Gauss-Seidel iterative technique. Both methods are used to solve the system of pressure linear equations and rely on the solution of large sparse matrices. The introduced solver accelerates the solution of these matrices in cases with a large number of cells. The execution time is reduced by half compared to the execution time without the matrix solver for cases with large matrices. The achieved improvement and the planned future work in this direction are important for performing efficient LWR safety analyses of large problems. (authors)
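The Gauss-Seidel technique mentioned can be sketched in Python on a small diagonally dominant system, a toy stand-in for the code's actual pressure matrices.

```python
# Gauss-Seidel iteration sketch for a diagonally dominant linear system
# (toy 3x3 matrix, not COBRA-TF's pressure equations).

def gauss_seidel(A, b, iters=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # use freshly updated values immediately
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = gauss_seidel(A, b)
```

Unlike Jacobi iteration, each sweep reuses the components updated earlier in the same sweep, which typically halves the iteration count on matrices like these; the strict diagonal dominance of the toy system guarantees convergence.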
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-03-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
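A quantum annealer such as the D-Wave machine samples low-energy states of a quadratic unconstrained binary optimization (QUBO) objective. As an illustration of that objective only (the weights below are invented and are not the paper's DNA-sequence encoding), a tiny QUBO can be minimized by brute force:

```python
import itertools

import numpy as np

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    A quantum annealer approximately minimizes the same objective;
    brute force is only feasible for the tiny case shown here.
    """
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Hypothetical 3-variable QUBO: the negative diagonal rewards setting a
# bit, the positive off-diagonal penalizes setting adjacent pairs.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
x, e = solve_qubo_bruteforce(Q)
```

In practice the classifier weights are trained by repeatedly drawing such minima from the annealer rather than enumerating states.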
Grolet, Aurelien; Thouverez, Fabrice
2015-02-01
This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach allows one to reduce the complete system to a unique polynomial equation in one variable driving all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
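The reduction of a polynomial system to a single univariate equation can be illustrated with a lexicographic Groebner basis in SymPy. The toy system below is illustrative and is not taken from the paper:

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Toy polynomial system of the kind harmonic balance produces
# (hypothetical, chosen only to show the elimination mechanism).
system = [x**2 + y**2 - 1, x - y]

# A lexicographic Groebner basis with x > y eliminates x, leaving a
# single univariate polynomial in y whose roots drive all solutions.
G = groebner(system, x, y, order='lex')
univariate = G.exprs[-1]
```

Once the univariate polynomial is solved numerically, the remaining basis elements back-substitute to recover all components of each steady state.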
Dean, David S.; Majumdar, Satya N.
2002-08-01
We study a fragmentation problem where an initial object of size x is broken into m random pieces provided x > x0, where x0 is an atomic cut-off. Subsequently, the fragmentation process continues for each of those daughter pieces whose sizes are bigger than x0. The process stops when all the fragments have sizes smaller than x0. We show that the fluctuation of the total number of splitting events, characterized by the variance, generically undergoes a nontrivial phase transition as one tunes the branching number m through a critical value m = mc. For m < mc the fluctuations are Gaussian, whereas for m > mc they are anomalously large and non-Gaussian. We apply this general result to analyse two different search algorithms in computer science.
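The fragmentation process is easy to simulate directly. The stick-breaking rule below is one plausible reading of "m random pieces", chosen for illustration, and is not necessarily the paper's splitting law:

```python
import random

def count_splittings(x, m, x0, rng=random):
    """Recursively fragment an object of size x into m random pieces,
    stopping once a fragment falls below the atomic cut-off x0.
    Returns the total number of splitting events.
    """
    if x <= x0:
        return 0
    # Break x at m-1 uniform random cut points (stick-breaking);
    # this particular splitting law is an assumption of this sketch.
    cuts = sorted(rng.random() for _ in range(m - 1))
    edges = [0.0] + cuts + [1.0]
    pieces = [x * (b - a) for a, b in zip(edges, edges[1:])]
    return 1 + sum(count_splittings(p, m, x0, rng) for p in pieces)

random.seed(0)
# Sample the number of splitting events for m = 2 and estimate its mean;
# the variance of such samples is the quantity whose scaling the paper
# analyses across the critical branching number.
samples = [count_splittings(1.0, 2, 0.01) for _ in range(200)]
mean_events = sum(samples) / len(samples)
```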
Collier, Nathan
2014-09-17
SUMMARY: We compare the computational efficiency of isogeometric Galerkin and collocation methods for partial differential equations in the asymptotic regime. We define a metric to identify when numerical experiments have reached this regime. We then apply these ideas to analyze the performance of different isogeometric discretizations, which encompass C0 finite element spaces and higher-continuous spaces. We derive convergence and cost estimates in terms of the total number of degrees of freedom and then perform an asymptotic numerical comparison of the efficiency of these methods applied to an elliptic problem. These estimates are derived assuming that the underlying solution is smooth, the full Gauss quadrature is used in each non-zero knot span and the numerical solution of the discrete system is found using a direct multi-frontal solver. We conclude that under the assumptions detailed in this paper, higher-continuous basis functions provide marginal benefits.
On turning waves for the inhomogeneous Muskat problem: a computer-assisted proof
International Nuclear Information System (INIS)
Gómez-Serrano, Javier; Granero-Belinchón, Rafael
2014-01-01
We exhibit a family of graphs that develop turning singularities (i.e. their Lipschitz seminorm blows up and they cease to be a graph, passing from the stable to the unstable regime) for the inhomogeneous, two-phase Muskat problem where the permeability is given by a nonnegative step function. We study the influence of different choices of the permeability and different boundary conditions (both at infinity and considering finite/infinite depth) in the development or prevention of singularities for short time. In the general case (inhomogeneous, confined) we prove a bifurcation diagram concerning the appearance or not of singularities when the depth of the medium and the permeabilities change. The proofs are carried out using a combination of classical analysis techniques and computer-assisted verification. (paper)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
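The linearized per-time-step allocation described in objective (b) can be illustrated as a small linear program. All coefficients, bounds and names below are hypothetical, chosen only to show the structure of one such sub-problem:

```python
from scipy.optimize import linprog

# Toy single-step allocation: choose hydropower release r (hm^3) and
# wind energy w (GWh) to meet an energy demand at minimum cost.
# Every number here is invented, not taken from the paper.
energy_per_release = 0.5   # GWh produced per hm^3 released
cost = [2.0, 1.0]          # unit costs of release and wind energy
demand = 10.0              # GWh to be met in this time step

# Minimize cost subject to the energy balance 0.5*r + w >= demand,
# written in linprog's A_ub x <= b_ub form as -(0.5*r + w) <= -demand.
res = linprog(c=cost,
              A_ub=[[-energy_per_release, -1.0]],
              b_ub=[-demand],
              bounds=[(0, 12), (0, 6)])  # release and wind capacities
```

Wind is the cheaper source per GWh here, so the solver saturates the wind capacity first and covers the remainder with releases; a network-programming solver exploits the same structure much faster at scale.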
International Nuclear Information System (INIS)
Ivanov, V.G.
1978-01-01
To process picture information on BESM-6 and CDC-6500 computers, the Joint Institute for Nuclear Research has developed a set of programs which enables the user to restore the spatial picture of measured events and calculate track parameters, as well as to kinematically identify the events and select the most probable hypothesis for each event. Wide-scale use of programs that process picture data obtained from various track chambers requires quite a number of different options of each program. For this purpose, a special program, the PATCHY editor, has been developed to update, edit and assemble large programs. A partitioned structure of the programs has therefore been chosen, which considerably reduces programming time. Basic problems of picture-processing software are discussed, and it is pointed out that the availability of terminal equipment for the BESM-6 and CDC-6500 computers will help to increase the processing speed and to implement an interactive mode. It is also planned to develop a training system to help the user learn how to use the programs of the system
International Nuclear Information System (INIS)
Horesh, L; Haber, E
2009-01-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
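The inner ℓ1 problem that such dictionary-design methods build on can be solved by iterative soft thresholding (ISTA). The sketch below is a generic solver on synthetic data, not the authors' algorithm or dictionary:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    A generic l1 solver; dictionary-design machinery typically sits on
    top of inner problems of exactly this form.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Synthetic sparse-recovery instance (random dictionary, 2-sparse signal).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
```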
6th International Workshop on New Computational Methods for Inverse Problems
International Nuclear Information System (INIS)
2016-01-01
Foreword This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 6th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2016 (http://complement.farman.ens-cachan.fr/NCMIP 2016.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 20, 2016. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013, May 2014 and May 2015. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel
Directory of Open Access Journals (Sweden)
Jorge Patiño
2016-01-01
Full Text Available This paper presents a performance evaluation of computational intelligence algorithms based on multiobjective theory for the solution of the Routing and Wavelength Assignment (RWA) problem in optical networks. The study evaluates the Firefly Algorithm, the Differential Evolutionary Algorithm, the Simulated Annealing Algorithm and two versions of the Particle Swarm Optimization algorithm. The paper provides a description of the multiobjective algorithms; then, an evaluation based on the performance provided by the multiobjective algorithms versus mono-objective approaches when dealing with different traffic loads, different numbers of wavelengths and the wavelength conversion process over the NSFNet topology is presented. Simulation results show that mono-objective algorithms properly solve the RWA problem for low values of data traffic and low numbers of wavelengths. However, the multiobjective approaches adapt better to online traffic when the number of wavelengths available in the network increases, as well as when wavelength conversion is implemented in the nodes.
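To make the RWA problem itself concrete, a minimal first-fit heuristic (a classic baseline, not one of the algorithms evaluated in the paper) can be sketched over precomputed routes:

```python
def first_fit_rwa(requests, paths, n_wavelengths):
    """Assign each request the lowest-index wavelength that is free on
    every link of its (precomputed) path; block the request otherwise.

    `requests` are request ids; `paths[r]` is a list of directed links.
    """
    used = {}        # (link, wavelength) -> request id occupying it
    assignment = {}  # request id -> wavelength index, or None if blocked
    for r in requests:
        for w in range(n_wavelengths):
            if all((link, w) not in used for link in paths[r]):
                for link in paths[r]:
                    used[(link, w)] = r
                assignment[r] = w
                break
        else:
            assignment[r] = None  # no wavelength free on the whole path
    return assignment

# Toy instance: requests 1 and 2 share link ('A', 'B'), so, without
# wavelength conversion, they must use different wavelengths.
paths = {1: [('A', 'B'), ('B', 'C')], 2: [('A', 'B')], 3: [('C', 'D')]}
assign = first_fit_rwa([1, 2, 3], paths, n_wavelengths=2)
```

Metaheuristics improve on this baseline by jointly choosing routes and wavelengths so as to reduce blocking under heavier loads.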
International Nuclear Information System (INIS)
Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern
2010-01-01
This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue from the measured displacement field, while considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are grouped into a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
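The interval-based clustering idea can be sketched as simple quantization of a stiffness field. The values are hypothetical and the scheme is far cruder than the paper's, but it shows how the number of unknowns becomes an explicit choice:

```python
import numpy as np

def cluster_parameters(stiffness, n_intervals):
    """Quantize a per-element stiffness field into a fixed number of
    intervals: the inverse problem then only sees `n_intervals`
    unknowns instead of one per element.
    """
    lo, hi = stiffness.min(), stiffness.max()
    edges = np.linspace(lo, hi, n_intervals + 1)
    # Interval index of every element (0 .. n_intervals-1).
    labels = np.clip(np.digitize(stiffness, edges[1:-1]), 0, n_intervals - 1)
    # Represent every element in an interval by the interval mean.
    means = np.array([stiffness[labels == k].mean() if np.any(labels == k)
                      else 0.5 * (edges[k] + edges[k + 1])
                      for k in range(n_intervals)])
    return labels, means

rng = np.random.default_rng(1)
stiffness = rng.uniform(1.0, 10.0, size=1000)   # hypothetical kPa values
labels, means = cluster_parameters(stiffness, n_intervals=8)
smoothed = means[labels]                         # quantized field, 8 unknowns
```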
COOBBO: A Novel Opposition-Based Soft Computing Algorithm for TSP Problems
Directory of Open Access Journals (Sweden)
Qingzheng Xu
2014-12-01
Full Text Available In this paper, we propose a novel definition of the opposite path. Its core feature is that the sequence of candidate paths and the distances between adjacent nodes in the tour are considered simultaneously. In a sense, the candidate path and its corresponding opposite path have the same (or at least similar) distance to the optimal path in the current population. Based on an accepted framework for employing opposition-based learning, Oppositional Biogeography-Based Optimization using the Current Optimum, called the COOBBO algorithm, is introduced to solve traveling salesman problems. We demonstrate its performance on eight benchmark problems and compare it with other optimization algorithms. Simulation results illustrate that the excellent performance of our proposed algorithm is attributed to the distinct definition of the opposite path. In addition, its great strength lies in exploitation for enhancing solution accuracy, not exploration for improving population diversity. Finally, by comparing different versions of COOBBO, another conclusion is that each successful opposition-based soft computing algorithm needs to strike and maintain a good balance between the backward adjacent node and the forward adjacent node.
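The paper's distance-aware opposite-path definition is not reproduced here. For contrast, a common generic opposition scheme for permutations, shown below, simply relabels city c as n-1-c; the toy coordinates are invented:

```python
import math

def tour_length(tour, coords):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def opposite_tour(tour):
    """Generic index-based opposition for permutations: city c is
    replaced by n-1-c. This is a common opposition-based-learning
    scheme, NOT the distance-aware definition proposed in the paper.
    """
    n = len(tour)
    return [n - 1 - c for c in tour]

coords = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 2)]  # toy instance
tour = [0, 2, 4, 1, 3]
opp = opposite_tour(tour)
```

The paper's contribution is precisely that a good opposite path should account for inter-node distances, which this naive relabeling ignores.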
Fast Computation of Categorical Richness on Raster Data Sets and Related Problems
DEFF Research Database (Denmark)
de Berg, Mark; Tsirogiannis, Constantinos; Wilkinson, Bryan
2015-01-01
In many scientific fields, it is common to encounter raster data sets consisting of categorical data, such as soil type or land usage of a terrain. A problem that arises in the presence of such data is the following: given a raster G of n cells storing categorical data, compute for every cell c … an algorithm that runs in O(n) time and one for circular windows that runs in O((1+K/r)n) time, where K is the number of different categories appearing in G. The algorithms are not only very efficient in theory, but also in practice: our experiments show that our algorithms can handle raster data sets of hundreds of millions of cells. The categorical richness problem is related to colored range counting, where the goal is to preprocess a colored point set such that we can efficiently count the number of colors appearing inside a query range. We present a data structure for colored range counting in R^2 for the case …
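A naive baseline makes the categorical richness problem concrete: it recomputes each window from scratch, costing O(n·w²) per raster rather than the O(n)-style bounds of the paper's algorithms. The grid below is a made-up example:

```python
import numpy as np

def richness_naive(grid, radius):
    """For every cell, count the distinct categories in the square
    window of half-width `radius` around it (clipped at the borders).
    Naive O(n * w^2) baseline, for illustration only.
    """
    rows, cols = grid.shape
    out = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            window = grid[max(0, i - radius):i + radius + 1,
                          max(0, j - radius):j + radius + 1]
            out[i, j] = len(np.unique(window))
    return out

# Tiny categorical raster: values are category ids (e.g. soil types).
grid = np.array([[1, 1, 2],
                 [1, 3, 2],
                 [4, 3, 2]])
rich = richness_naive(grid, radius=1)
```

The efficient algorithms avoid recounting by sliding the window and tracking per-category multiplicities incrementally.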
Solution of optimization problems by means of the CASTEM 2000 computer code
International Nuclear Information System (INIS)
Charras, Th.; Millard, A.; Verpeaux, P.
1991-01-01
In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, the positioning of some parts of the robot must be very accurate, which highly depends on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economical (material quantities, design and manufacturing cost) matters. This is precisely the aim of optimization techniques, within the framework of structural analysis. A general statement of this problem could be as follows: find the set of parameters which leads to the minimum of a given function and satisfies some constraints. For example, in the case of a robot component, the parameters can be some geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria, such as a given stiffness, and of manufacturing constraints (minimum available thickness, etc). For nuclear industry purposes, a robust method was chosen and implemented in the new generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. According to the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example of application will be presented to illustrate the possibilities of the method. (author)
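The flavour of such an optimum design problem can be sketched with SciPy's SLSQP, which likewise solves a sequence of convex (quadratic) subproblems. The coefficients are hypothetical and this is not CASTEM 2000's solver:

```python
from scipy.optimize import minimize

# Toy optimum design: choose plate thicknesses t1, t2 to minimize weight
# subject to a minimum-stiffness constraint and manufacturing bounds.
# All numbers are invented for illustration.
density = [2.0, 3.0]          # weight per unit thickness of each plate

def weight(t):
    return density[0] * t[0] + density[1] * t[1]

def stiffness_margin(t):
    # Bending stiffness grows like t^3; require at least 10 units.
    return t[0]**3 + 2.0 * t[1]**3 - 10.0

res = minimize(weight, x0=[2.0, 2.0], method='SLSQP',
               bounds=[(0.5, 5.0), (0.5, 5.0)],        # available thicknesses
               constraints=[{'type': 'ineq', 'fun': stiffness_margin}])
```

Like the convex-linearization approach in the abstract, SLSQP replaces the nonlinear constraint by a local convex approximation at each iterate and solves the resulting subproblem.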
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office (Figure 6: Transfers from all sites in the last 90 days.) For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
Fundamental measure theory for hard-sphere mixtures: a review
International Nuclear Information System (INIS)
Roth, Roland
2010-01-01
Hard-sphere systems are one of the fundamental model systems of statistical physics and represent an important reference system for molecular or colloidal systems with soft repulsive or attractive interactions in addition to hard-core repulsion at short distances. Density functional theory for classical systems, as one of the core theoretical approaches of the statistical physics of fluids and solids, has to be able to treat such an important system successfully and accurately. Fundamental measure theory is to date the most successful and most accurate density functional theory for hard-sphere mixtures. Since its introduction, fundamental measure theory has been applied to many problems, tested against computer simulations, and further developed in many respects. The literature on fundamental measure theory is already large and is growing fast. This review aims to provide a starting point for readers new to fundamental measure theory and an overview of important developments. (topical review)
Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia
2013-01-01
The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
Conference logo This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
Conference logo This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
Lobe, Elisabeth; Stollenwerk, Tobias; Tröltzsch, Anke
2015-01-01
In the recent years, the field of adiabatic quantum computing has gained importance due to the advances in the realisation of such machines, especially by the company D-Wave Systems. These machines are suited to solve discrete optimisation problems which are typically very hard to solve on a classical computer. Due to the quantum nature of the device it is assumed that there is a substantial speedup compared to classical HPC facilities. We explain the basic principles of adiabatic ...
International Nuclear Information System (INIS)
Maslenikov, O.R.; Johnson, J.J.; Tiong, L.W.; Mraz, M.J.; Bumpus, S.; Gerhard, M.A.
1985-03-01
In this volume of the SMACS User's Manual an example problem is presented to demonstrate the type of problem that SMACS is capable of solving and to familiarize the user with the format of the various data files involved. This volume is organized into thirteen appendices, which follow a short description of the problem. Each appendix contains listings of the input and output files associated with each computer run that was necessary to solve the problem. In cases where one SMACS program uses data generated by another SMACS program, the data file is shown in the appendix for the program which generated it
Chen, Hudong
2001-06-01
There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. It provides a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries from automotive, aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing a lattice Boltzmann based commercial CFD software package, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs worldwide. In this talk, we present an overall LB based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well known academic benchmarks. These include straight channels, backward
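A bare-bones D2Q9 lattice-BGK step illustrates the collide-and-stream structure of such methods. This is a textbook sketch on a tiny periodic grid, far removed from PowerFLOW's turbulence modelling:

```python
import numpy as np

# D2Q9 lattice velocities and weights (standard lattice-BGK values).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('qd,xyd->xyq', c, u)              # c_q . u per cell
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]  # |u|^2 per cell
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision followed by periodic streaming."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # relax to equilibrium
    for q, (cx, cy) in enumerate(c):                 # advect along c_q
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

# Uniform fluid at rest with a small density bump; one time step.
nx = ny = 8
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f[4, 4, :] *= 1.01
mass0 = f.sum()
f = lbm_step(f)
```

Both the collision (the equilibrium carries the same density) and the periodic streaming conserve total mass exactly, which is one of the structural properties that makes the scheme attractive.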
Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón
2015-01-01
Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...
Ohl, Ricky
In this case study, computer supported argument visualisation has been applied to the analysis and representation of the draft South East Queensland Regional Plan Consultation discourse, demonstrating how argument mapping can help deliver the transparency and accountability required in participatory democracy. Consultative democracy for regional planning falls into a category of problems known as "wicked problems". Inherent in this environment are heterogeneous viewpoints, agendas and voices, built on disparate and often contradictory logic. An argument ontology and notation that was designed specifically to deal with consultative urban planning around wicked problems is the Issue Based Information System (IBIS) and IBIS notation (Rittel & Webber, 1984). The software used for argument visualisation in this case was Compendium, a derivative of IBIS. The large number of stakeholders and the heterogeneity of the discourse in this environment call for a unique approach to argument mapping. The map design model developed from this research has been titled a "Consultation Map". The design incorporates the IBIS ontology within a hybrid of mapping approaches, amalgamating elements from concept, dialogue, argument, debate, thematic and tree-mapping. The consultation maps developed from the draft South East Queensland Regional Plan Consultation provide a transparent visual record giving evidence of the themes of citizen issues within the consultation discourse. The consultation maps also link the elicited discourse themes to related policies from the SEQ Regional Plan, providing explicit evidence of SEQ Regional Plan policy decisions matching citizen concerns. The final consultation map in the series provides explicit links between SEQ Regional Plan policy items and monitoring activities reporting on the ongoing implementation of the SEQ Regional Plan. This map provides updatable evidence of, and accountability for, SEQ Regional Plan policy implementation and developments.
Directory of Open Access Journals (Sweden)
Zahra Pourabdollahi
2017-12-01
Full Text Available The supplier evaluation and selection problem is among the most important logistics decisions and has been addressed extensively in supply chain management. The same decision is also important in freight transportation, since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics, including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criterion and the state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach for supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both the behavioral and economic aspects of the supplier selection process. The model uses a system of ordered response models to determine importance weights of the different criteria in supplier evaluation from the buyers’ point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and rank them. The calculated utilities are then entered into a mathematical programming model in which the best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs, while balancing the capacity of potential suppliers to ensure market-clearing mechanisms. The proposed model was implemented under an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.
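The ranking step this abstract describes, criterion weights feeding a per-supplier utility, can be sketched in a few lines. The weights and scores below are made-up illustrations, not values from the paper (there the weights come from a system of ordered-response models):

```python
def rank_suppliers(weights, scores):
    """Rank suppliers by weighted-sum utility.

    weights: criterion -> importance weight (illustrative values here)
    scores:  supplier -> criterion -> normalized score
    """
    utilities = {s: sum(weights[c] * sc[c] for c in weights)
                 for s, sc in scores.items()}
    ranking = sorted(utilities, key=utilities.get, reverse=True)
    return ranking, utilities

ranked, u = rank_suppliers(
    {"price": 0.5, "reliability": 0.3, "capacity": 0.2},
    {"A": {"price": 0.9, "reliability": 0.4, "capacity": 0.7},
     "B": {"price": 0.6, "reliability": 0.9, "capacity": 0.8}},
)
```

In the paper these utilities then enter a mathematical program; here the sketch stops at the ranking itself.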
Aeschbacher, S; Futschik, A; Beaumont, M A
2013-02-01
We propose a two-step procedure for estimating multiple migration rates in an approximate Bayesian computation (ABC) framework, accounting for global nuisance parameters. The approach is not limited to migration, but generally of interest for inference problems with multiple parameters and a modular structure (e.g. independent sets of demes or loci). We condition on a known, but complex demographic model of a spatially subdivided population, motivated by the reintroduction of Alpine ibex (Capra ibex) into Switzerland. In the first step, the global parameters ancestral mutation rate and male mating skew have been estimated for the whole population in Aeschbacher et al. (Genetics 2012; 192: 1027). In the second step, we estimate in this study the migration rates independently for clusters of demes putatively connected by migration. For large clusters (many migration rates), ABC faces the problem of too many summary statistics. We therefore assess by simulation if estimation per pair of demes is a valid alternative. We find that the trade-off between reduced dimensionality for the pairwise estimation on the one hand and lower accuracy due to the assumption of pairwise independence on the other depends on the number of migration rates to be inferred: the accuracy of the pairwise approach increases with the number of parameters, relative to the joint estimation approach. To distinguish between low and zero migration, we perform ABC-type model comparison between a model with migration and one without. Applying the approach to microsatellite data from Alpine ibex, we find no evidence for substantial gene flow via migration, except for one pair of demes in one direction. © 2013 Blackwell Publishing Ltd.
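The core of ABC rejection sampling, on which the two-step procedure builds, is easy to sketch. The exponential toy model, uniform prior bounds and tolerance below are illustrative assumptions, not the paper's ibex model:

```python
import random

def abc_rejection(observed_stat, simulate, prior_draw, eps,
                  n_draws=20000, seed=1):
    """Toy ABC rejection sampler: draw parameters from the prior, simulate
    data, and keep draws whose summary statistic lands within eps of the
    observed one.  The kept draws approximate the posterior."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < eps:
            accepted.append(theta)
    return accepted

# Toy model: summary = mean of 50 exponential draws with rate theta.
# True rate 2.0 gives an expected mean of 0.5.
def simulate(theta, rng):
    return sum(rng.expovariate(theta) for _ in range(50)) / 50

posterior = abc_rejection(0.5, simulate,
                          lambda rng: rng.uniform(0.1, 5.0), eps=0.05)
estimate = sum(posterior) / len(posterior)
```

The dimensionality problem the abstract discusses shows up here as the choice of summary statistics: with many migration rates, the accept/reject comparison involves many statistics at once, which is what motivates the pairwise decomposition.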
Some Student Problems: Bungi Jumping, Maglev Trains, and Misaligned Computer Monitors.
Whineray, Scott
1991-01-01
Presented are three physics problems from the New Zealand Entrance Scholarship examinations which are generally attempted by more able students. Problem situations, illustrations, and solutions are detailed. (CW)
Papadopoulou, Maria P; Nikolos, Ioannis K; Karatzas, George P
2010-01-01
Artificial Neural Networks (ANNs) comprise a powerful tool to approximate the complicated behavior and response of physical systems allowing considerable reduction in computation time during time-consuming optimization runs. In this work, a Radial Basis Function Artificial Neural Network (RBFN) is combined with a Differential Evolution (DE) algorithm to solve a water resources management problem, using an optimization procedure. The objective of the optimization scheme is to cover the daily water demand on the coastal aquifer east of the city of Heraklion, Crete, without reducing the subsurface water quality due to seawater intrusion. The RBFN is utilized as an on-line surrogate model to approximate the behavior of the aquifer and to replace some of the costly evaluations of an accurate numerical simulation model which solves the subsurface water flow differential equations. The RBFN is used as a local approximation model in such a way as to maintain the robustness of the DE algorithm. The results of this procedure are compared to the corresponding results obtained by using the Simplex method and by using the DE procedure without the surrogate model. As it is demonstrated, the use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.
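The DE component can be illustrated independently of the surrogate. Below is a minimal DE/rand/1/bin sketch on a toy sphere objective; the RBFN surrogate screening described in the abstract is omitted, and all parameter values are arbitrary choices:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, greedy one-to-one selection.  (The paper
    additionally replaces some expensive evaluations of f with an RBF-network
    surrogate; that is not shown here.)"""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                             bounds[j][0]), bounds[j][1])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            f_trial = f(trial)
            if f_trial <= cost[i]:          # greedy replacement
                pop[i], cost[i] = trial, f_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy objective: 3-D sphere function, global minimum 0 at the origin
x_best, f_best = differential_evolution(lambda v: sum(t * t for t in v),
                                        [(-5.0, 5.0)] * 3)
```

The surrogate idea amounts to calling a cheap approximation of `f` for most trial points and reserving the expensive simulator for promising ones.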
Directory of Open Access Journals (Sweden)
O. Demir
2016-01-01
Full Text Available In this study, to investigate and properly understand the nature of fracture behavior under in-plane mixed-mode (Mode-I/II) loading, three-dimensional fracture analyses and experiments on compact tension shear (CTS) specimens are performed under different mixed-mode loading conditions. Al 7075-T651 aluminum machined from rolled plates in the L-T rolling direction (crack plane perpendicular to the rolling direction) is used in this study. Results from finite element analyses, as well as fracture loads and crack deflection angles obtained from the experiments, are presented. To simulate the real conditions in the experiments, contacts are defined between the contact surfaces of the loading devices, specimen and loading pins. Modeling, meshing and solution of the problem involving the whole assembly, i.e., loading devices, pins and the specimen, with contact mechanics are performed using ANSYS™. Then, the CTS specimen is analyzed separately using a submodeling approach, in which three-dimensional enriched finite elements are used in the FRAC3D solver to calculate the resulting stress intensity factors along the crack front. Having performed detailed computational and experimental studies on the CTS specimen, a new specimen type, together with its loading device, is also proposed that has smaller dimensions than the regular CTS specimen. Experimental results for the new specimen are also presented.
Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer
International Nuclear Information System (INIS)
Pryor, R.J.; Cline, D.D.
1992-01-01
A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented.
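The ingredients the abstract lists (genetic representation, fitness function, operators) can be sketched on a toy equation-solving problem. This bare-bones binary GA is an illustration under invented parameter choices, not the paper's two-fluid-flow implementation:

```python
import random

def genetic_search(fitness, bits=16, pop_size=30, generations=60,
                   p_mut=0.02, seed=3):
    """Bare-bones binary GA: tournament selection, one-point crossover,
    bit-flip mutation; returns the best chromosome ever evaluated."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(range(pop_size), 2)
            return pop[a] if fitness(pop[a]) >= fitness(pop[b]) else pop[b]
        nxt = []
        while len(nxt) < pop_size:
            cut = rng.randrange(1, bits)                 # one-point crossover
            child = tournament()[:cut] + tournament()[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Toy problem: decode a 16-bit chromosome to x in [0, 2] and solve x^2 = 2,
# rewarding small residuals (the paper's fitness is instead built from the
# residual of the discretized flow equations)
def decode(ch):
    return 2.0 * sum(g << i for i, g in enumerate(ch)) / (2 ** len(ch) - 1)

best = genetic_search(lambda ch: 1.0 / (1.0 + abs(decode(ch) ** 2 - 2.0)))
```

On a multiprocessor like the NCUBE, the fitness evaluations of a generation are the natural unit to distribute across processors, since they are independent.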
Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer
International Nuclear Information System (INIS)
Pryor, R.J.; Cline, D.D.
1993-01-01
A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented. (orig.)
Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era
Energy Technology Data Exchange (ETDEWEB)
Balman, Mehmet; Byna, Surendra
2011-12-06
Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.
International Nuclear Information System (INIS)
Sakamoto, Yukio; Naito, Yoshitaka
1990-11-01
A computer code system RADHEAT-V4 has been developed for safety evaluation of radiation shielding of nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The radiations evaluated are neutrons and gamma rays. The benchmark problems cover penetration, streaming and skyshine. The computed results are more accurate than those obtained with the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. RADHEAT-V4, however, requires a large core memory and frequent I/O. (author)
Unraveling Quantum Annealers using Classical Hardness
Martin-Mayor, Victor; Hen, Itay
2015-01-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257
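A classical baseline of the kind the D-Wave chip is compared against is simulated annealing. Here is a minimal sketch for a tiny open Ising chain; the couplings, cooling schedule and step count are arbitrary illustrative choices:

```python
import math
import random

def simulated_annealing(J, steps=5000, T0=2.0, T1=0.05, seed=7):
    """Anneal an open Ising chain with couplings J[i] between spins i, i+1;
    energy E(s) = -sum_i J[i] s[i] s[i+1].  Geometric cooling from T0 to T1,
    single-spin-flip Metropolis moves."""
    rng = random.Random(seed)
    n = len(J) + 1
    s = [rng.choice([-1, 1]) for _ in range(n)]
    def energy(sp):
        return -sum(J[i] * sp[i] * sp[i + 1] for i in range(len(J)))
    e = energy(s)
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)   # geometric cooling schedule
        i = rng.randrange(n)
        s[i] = -s[i]                        # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / T):
            e = e_new                       # accept
        else:
            s[i] = -s[i]                    # reject: undo the flip
    return s, e

# An open chain can satisfy every bond, so the ground energy is -sum |J[i]|
spins, e = simulated_annealing([1.0, -0.5, 2.0, 1.5, -1.0])
```

Temperature chaos, the mechanism the abstract highlights, refers to the near-optimal configurations of hard instances reshuffling drastically as T is lowered, which defeats exactly this kind of gradual cooling.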
Czech Academy of Sciences Publication Activity Database
Červinka, Michal
2010-01-01
Roč. 2010, č. 4 (2010), s. 730-753 ISSN 0023-5954 Institutional research plan: CEZ:AV0Z10750506 Keywords : equilibrium problems with complementarity constraints * homotopy * C-stationarity Subject RIV: BC - Control Systems Theory Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf
Explaining the Mind: Problems, Problems
Harnad, Stevan
2001-01-01
The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble (unless one is ready to resort to telekinetic dualism). Fortunately, the "easy" problems of cognitive science (such as the how and why of categorization and language) are not insoluble. Five books (by Damasio, Edelman/Tononi...
Computer-related vision problems in Osogbo, south-western Nigeria ...
African Journals Online (AJOL)
Widespread use of computers for office work and e-learning has resulted in increased visual demands among computer users. The increased visual demands have led to development of ocular complaints and discomfort among users. The objective of this study is to determine the prevalence of computer related eye ...
Hard equality constrained integer knapsacks
Aardal, K.I.; Lenstra, A.K.; Cook, W.J.; Schulz, A.S.
2002-01-01
We consider the following integer feasibility problem: "Given positive integers a_0, a_1, ..., a_n, with gcd(a_1, ..., a_n) = 1 and a = (a_1, ..., a_n), does there exist a nonnegative integer vector x satisfying ax = a_0?" Some instances of this type have been found to be extremely hard to solve.
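For modest right-hand sides the feasibility question itself is answerable by a standard pseudo-polynomial dynamic program; the hardness the abstract refers to concerns LP-based branch-and-bound on such instances, not this kind of DP. A sketch:

```python
def is_feasible(a, a0):
    """Decide whether ax = a0 has a nonnegative integer solution x, by a
    pseudo-polynomial dynamic program over reachable right-hand sides
    (O(a0 * n) time, so only practical when a0 is small)."""
    reachable = [True] + [False] * a0
    for t in range(1, a0 + 1):
        reachable[t] = any(ai <= t and reachable[t - ai] for ai in a)
    return reachable[a0]

# With a = (3, 5) (gcd 1), the largest non-representable target is the
# Frobenius number 3*5 - 3 - 5 = 7, so 7 is infeasible while 8 = 3 + 5 is.
```

The gcd condition in the problem statement guarantees that all sufficiently large targets are feasible; the interesting instances sit near the feasibility boundary.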
Assessment of computational fluid dynamics (CFD) for nuclear reactor safety problems
International Nuclear Information System (INIS)
Smith, B. L.; Andreani, M.; Bieder, U.; Bestion, D.; Ducros, F.; Graffard, E.; Heitsch, M.; Scheuerer, M.; Henriksson, M.; Hoehne, T.; Rohde, U.; Lucas, D.; Komen, E.; Houkema, M.; Mahaffy, J.; Moretti, F.; Morii, T.; Muehlbauer, P.; Song, C.H.; Zigh, G.; Menter, F.; Watanabe, T.
2008-01-01
The basic objective of the present work was to provide documented evidence of the need to perform CFD simulations in Nuclear Reactor Safety (NRS), concentrating on single-phase applications, and to assess the competence of the present generation of CFD codes to perform these simulations reliably. Fulfilling this objective involves multiple tasks, summarized as: to provide a classification of NRS problems requiring CFD analysis, to identify and catalogue existing CFD assessment bases, to identify shortcomings in CFD approaches, and to put into place a means for extending the CFD assessment database, with an emphasis on NRS applications. The resulting document is presented here. After some introductory remarks, Chapter 3 lists twenty-two NRS issues for which it is considered that the application of CFD would bring real benefits in terms of better predictive capability. This classification is followed by a short description of each safety issue, a state-of-the-art summary of what has been attempted, and what still needs to be done to improve reliability. Chapter 4 details the assessment bases that have already been established in both the nuclear and non-nuclear domains, and discusses the usefulness and relevance of the work to NRS applications, where appropriate. This information is augmented in Chapter 5 by descriptions of the existing CFD assessment bases that have been established around specific NRS problems. Typical examples are experiments devoted to the boron dilution issue, pressurised thermal shock, and thermal fatigue in pipes. Chapter 6 is devoted to identifying the technology gaps which need to be closed to make CFD a more trustworthy analytical tool. Some deficiencies identified are the lack of a Phenomenon Identification and Ranking Table (PIRT), limitations in the range of application of turbulence models, coupling of CFD with neutronics and system codes, and computer power limitations. Most CFD codes currently being used have their own, custom
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
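The brute-force fallback that NP-hardness leaves us with can be illustrated on a classical toy code: minimum-weight decoding by exhaustive search over error patterns, exponential in the block length. The parity-check matrix below is the 3-bit repetition code, purely for illustration; quantum syndrome decoding adds the degeneracy subtleties the abstract discusses:

```python
from itertools import combinations

def min_weight_decode(H, syndrome):
    """Find a minimum-weight binary error e with H e = syndrome (mod 2) by
    exhaustive search over error patterns of increasing weight; the search
    is exponential in the block length n."""
    n = len(H[0])
    for w in range(n + 1):
        for support in combinations(range(n), w):
            e = [1 if i in support else 0 for i in range(n)]
            if all(sum(row[i] * e[i] for i in range(n)) % 2 == s
                   for row, s in zip(H, syndrome)):
                return e
    return None

# Toy [3,1] repetition code: parity checks x0+x1 and x1+x2
H = [[1, 1, 0], [0, 1, 1]]
e = min_weight_decode(H, [1, 0])   # syndrome produced by flipping bit 0
```

For structured code families, polynomial-time decoders exist (e.g. syndrome tables, belief propagation); the NP-hardness result concerns the general problem, where no such structure is guaranteed.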
NP-hardness of decoding quantum error-correction codes
International Nuclear Information System (INIS)
Hsieh, Min-Hsiu; Le Gall, Francois
2011-01-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir
2016-12-01
The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms operating on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.
Mead, C.; Horodyskyj, L.; Buxner, S.; Semken, S. C.; Anbar, A. D.
2016-12-01
Developing scientific reasoning skills is a common learning objective for general-education science courses. However, effective assessments for such skills typically involve open-ended questions or tasks, which must be hand-scored and may not be usable online. Using computer-based learning environments, reasoning can be assessed automatically by analyzing student actions within the learning environment. We describe such an assessment under development and present pilot results. In our content-neutral instrument, students solve a problem by collecting and interpreting data in a logical, systematic manner. We then infer reasoning skill automatically based on student actions. Specifically, students investigate why Earth has seasons, a scientifically simple but commonly misunderstood topic. Students are given three possible explanations and asked to select a set of locations on a world map from which to collect temperature data. They then explain how the data support or refute each explanation. The best approaches will use locations in both the Northern and Southern hemispheres to argue that the contrasting seasonality of the hemispheres supports only the correct explanation. We administered a pilot version to students at the beginning of an online, introductory science course (n = 223) as an optional extra credit exercise. We were able to categorize students' data collection decisions as more and less logically sound. Students who choose the most logical measurement locations earned higher course grades, but not significantly higher. This result is encouraging, but not definitive. In the future, we will clarify our results in two ways. First, we plan to incorporate more open-ended interactions into the assessment to improve the resolving power of this tool. Second, to avoid relying on course grades, we will independently measure reasoning skill with one of the existing hand-scored assessments (e.g., Critical Thinking Assessment Test) to cross-validate our new
Computational methods for the nuclear and neutron matter problems: Progress report
International Nuclear Information System (INIS)
Kalos, M.H.
1989-01-01
This proposal is concerned with the use of Monte Carlo methods as a numerical technique in the study of nuclear structure. The straightforward use of Monte Carlo in nuclear physics has been impeded by certain technical difficulties. Foremost among them is the fact that numerical integration of the Schrödinger equation, by now straightforward for the ground state of boson systems, is substantially more difficult for many-fermion systems. The first part of this proposal outlines a synthesis of several advances into a single experimental algorithm. The proposed work is to implement and study the properties of the algorithm with simple models of few-body nuclei as the physical system to be investigated. Variational Monte Carlo remains an extremely powerful and useful method. Its application to nuclear structure physics presents unique difficulties. The varieties of interactions in the phenomenological potentials must be reflected in a corresponding richness of the correlations in accurate trial wave functions. The sheer number of terms in such trial functions, written as a product of pairs, then presents specific difficulties. We have had good success in our first experiments on a random field method that decouples the interactions, and propose to extend our research to 16O and to p-shell nuclei. Spin-orbit terms present special problems as well, because the implied gradient operators must be applied repeatedly. We propose to treat them in first order only, for now, and to calculate the result in three- and four-body nuclei. We propose a new Monte Carlo method for computing the amplitude of deuteron components in trial functions for heavier nuclei (here, specifically for 6Li). The method is an extension of that used for off-diagonal matrix elements in quantum fluids.
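The flavor of variational Monte Carlo is easy to show on the 1D harmonic oscillator (a textbook toy, not the nuclear problem itself), where the Gaussian trial function with alpha = 1 is exact and the local energy becomes constant:

```python
import math
import random

def metropolis_step(x, alpha, step, rng):
    """One Metropolis move targeting |psi|^2 with psi = exp(-alpha x^2 / 2)."""
    x_new = x + rng.uniform(-step, step)
    # acceptance ratio |psi(x_new)/psi(x)|^2; values > 1 always accept
    if rng.random() < math.exp(-alpha * (x_new * x_new - x * x)):
        return x_new
    return x

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=11):
    """Variational Monte Carlo for the 1D oscillator (hbar = m = omega = 1).
    Local energy of the Gaussian trial function:
        E_L(x) = alpha/2 + x^2 (1 - alpha^2) / 2."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(1000):                    # burn-in
        x = metropolis_step(x, alpha, step, rng)
    e_sum = 0.0
    for _ in range(n_steps):
        x = metropolis_step(x, alpha, step, rng)
        e_sum += alpha / 2 + x * x * (1 - alpha * alpha) / 2
    return e_sum / n_steps

# alpha = 1 makes the trial function exact: E_L = 1/2 at every sample, so the
# variance vanishes and the estimate equals the ground energy exactly.
assert abs(vmc_energy(1.0) - 0.5) < 1e-12
```

The fermion and spin-orbit complications discussed in the abstract arise precisely because nuclear trial functions are far richer than this single Gaussian, but the Metropolis sampling loop is structurally the same.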
Vekli, Gülsah Sezen; Çimer, Atilla
2017-01-01
This study investigated development of students' scientific argumentation levels in the applications made with Problem-Based Computer-Aided Material (PBCAM) designed about Human Endocrine System. The case study method was used: The study group was formed of 43 students in the 11th grade of the science high school in Rize. Human Endocrine System…
DEFF Research Database (Denmark)
Iris, Cagatay; Pacino, Dario; Røpke, Stefan
of the vessels primarily depends on the number of containers to be handled and the number of cranes deployed, it would be beneficial to consider the integration of those two problems. This work extends the state-of-the-art by strengthening the current best mathematical formulation. Computational experiments...
International Nuclear Information System (INIS)
Bardhan, Jaydeep P; Knepley, Matthew G
2012-01-01
We present two open-source (BSD) implementations of ellipsoidal harmonic expansions for solving problems of potential theory using separation of variables. Ellipsoidal harmonics are used surprisingly infrequently, considering their substantial value for problems ranging in scale from molecules to the entire solar system. In this paper, we suggest two possible reasons for the paucity relative to spherical harmonics. The first is essentially historical—ellipsoidal harmonics developed during the late 19th century and early 20th, when it was found that only the lowest-order harmonics are expressible in closed form. Each higher-order term requires the solution of an eigenvalue problem, and tedious manual computation seems to have discouraged applications and theoretical studies. The second explanation is practical: even with modern computers and accurate eigenvalue algorithms, expansions in ellipsoidal harmonics are significantly more challenging to compute than those in Cartesian or spherical coordinates. The present implementations reduce the 'barrier to entry' by providing an easy and free way for the community to begin using ellipsoidal harmonics in actual research. We demonstrate our implementation using the specific and physiologically crucial problem of how charged proteins interact with their environment, and ask: what other analytical tools await re-discovery in an era of inexpensive computation?
Beal, Carole R.; Rosenblum, L. Penny
2018-01-01
Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…
Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.
2011-01-01
We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method
Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia
2018-01-01
As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
Tomar, S.K.
2002-01-01
It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.
Exact sampling hardness of Ising spin models
Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.
2017-09-01
We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough so that classical sampling of the output distribution is classically difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
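The matrix permanent at the heart of the hardness argument can be computed exactly, in exponential time, with Ryser's inclusion-exclusion formula; a pure-Python sketch:

```python
from itertools import combinations

def permanent(M):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2) here.
    Exact computation of the permanent is #P-hard, which is why output
    probabilities proportional to |perm|^2 underpin sampling-hardness
    arguments of the kind the paper makes."""
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total
```

Unlike the determinant, no row-reduction shortcut exists: the permanent lacks the sign alternation that makes Gaussian elimination work, which is one intuition for the complexity gap.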
Haplotyping Problem, A Clustering Approach
International Nuclear Information System (INIS)
Eslahchi, Changiz; Sadeghi, Mehdi; Pezeshk, Hamid; Kargar, Mehdi; Poormohammadi, Hadi
2007-01-01
Construction of two haplotypes from a set of Single Nucleotide Polymorphism (SNP) fragments is called the haplotype reconstruction problem. One of the most popular computational models for this problem is Minimum Error Correction (MEC). Since MEC is an NP-hard problem, we propose here a novel heuristic algorithm, based on clustering analysis in data mining, for the haplotype reconstruction problem. Based on the Hamming distance and similarity between two fragments, our iterative algorithm produces two clusters of fragments; in each iteration, the algorithm assigns a fragment to one of the clusters. Our results suggest that the algorithm has a lower reconstruction error rate in comparison with other algorithms.
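A single greedy pass of the kind of distance-based clustering the abstract describes can be sketched as follows; the seeding, the one-pass assignment and the toy fragments are illustrative simplifications of the paper's iterative algorithm:

```python
def hamming(f1, f2):
    """Mismatch count over positions covered by both fragments ('-' = gap)."""
    return sum(1 for a, b in zip(f1, f2)
               if a != '-' and b != '-' and a != b)

def two_cluster(fragments, seeds):
    """One greedy pass: seed two clusters with two fragments, then put each
    remaining fragment into the cluster holding its nearest member.  (The
    paper's algorithm iterates and reconsiders assignments; this is a single
    pass for illustration.)"""
    clusters = [[fragments[seeds[0]]], [fragments[seeds[1]]]]
    for i, frag in enumerate(fragments):
        if i in seeds:
            continue
        d = [min(hamming(frag, member) for member in cl) for cl in clusters]
        clusters[d[1] < d[0]].append(frag)   # bool indexes the nearer cluster
    return clusters

# Toy SNP fragments; seeds 0 and 2 look like complementary haplotypes
frags = ["0011", "0-11", "1100", "110-", "0010"]
c = two_cluster(frags, seeds=(0, 2))
```

Each final cluster then yields a consensus haplotype by majority vote per SNP position, and the MEC cost is the total number of corrections needed to make every fragment agree with its cluster's consensus.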
Visual inspection technology in the hard disc drive industry
Muneesawang, Paisarn
2015-01-01
A presentation of the use of computer vision systems to control manufacturing processes and product quality in the hard disk drive industry. Visual Inspection Technology in the Hard Disk Drive Industry is an application-oriented book borne out of collaborative research with the world's leading hard disk drive companies. It covers the latest developments and important topics in computer vision technology in hard disk drive manufacturing, as well as offering a glimpse of future technologies.
Özyurt, Özcan
2015-01-01
Problem solving is an indispensable part of engineering. Improving critical thinking dispositions for solving engineering problems is one of the objectives of engineering education. In this sense, knowing critical thinking and problem solving skills of engineering students is of importance for engineering education. This study aims to determine…
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
is vital. Multi-criteria database search and Computer Aided Molecular Design (CAMD) can be applied to generate, test and evaluate promising pure component/mixture candidates as process fluids to help optimize cycle design and performance. The problem formulation for the development of novel working fluids...... is an advanced CAMD challenge both in terms of data and computational demand, because it includes process-related as well as property-related equations. In CAMD problems the identification of target properties is often based on expert knowledge. To support identification of relevant target properties, in this study...... allows the ranking of significance of properties and also the identification of a set of properties which are relevant for the design of a working fluid. In this study the CAMD problem for the development of novel working fluids for organic Rankine cycles (ORC) is formulated as a mathematical optimization...
CO2 laser milling of hard tissue
Werner, Martin; Ivanenko, Mikhail; Harbecke, Daniela; Klasing, Manfred; Steigerwald, Hendrik; Hering, Peter
2007-02-01
Drilling of bone and tooth tissue belongs to recurrent medical procedures (screw and pin bores, bores for implant insertion, trepanation etc.). Small round bores can in general be produced quickly with mechanical drills. Problems arise, however, with angled drilling, with the necessity to complete the drilling without damaging sensitive soft tissue beneath the bone, or with the attempt to mill small noncircular cavities precisely. We present investigations on laser hard tissue "milling", which can be advantageous for solving these problems. The "milling" is done with a CO2 laser (10.6 μm) with pulse durations of 50 - 100 μs, combined with a PC-controlled galvanic beam scanner and with a fine water spray, which helps to avoid thermal side effects. Damage to underlying soft tissue can be prevented through control of the optical or acoustic ablation signal. The ablation of hard tissue is accompanied by a strong glowing, which is absent during the laser beam's action on soft tissue. The acoustic signals from the diverse tissue types exhibit distinct differences in spectral composition. Computer image analysis could also be a useful tool to control the operation. Laser "milling" of noncircular cavities 1 - 4 mm wide and about 10 mm deep is particularly interesting for dental implantology. In ex-vivo investigations we found conditions for fast laser "milling" of the cavities without thermal damage and with minimal tapering. This included exploration of different filling patterns (concentric rings, crosshatch, parallel lines and their combinations), definition of the maximal pulse duration, repetition rate and laser power, and the optimal position of the spray. The optimized results give evidence for the applicability of the CO2 laser for biologically tolerable "milling" of deep cavities in hard tissue.
International Nuclear Information System (INIS)
Nam, H; Stoitsov, M; Nazarewicz, W; Hagen, G; Kortelainen, M; Pei, J C; Bulgac, A; Maris, P; Vary, J P; Roche, K J; Schunck, N; Thompson, I; Wild, S M
2012-01-01
The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.
DESIGN OF EDUCATIONAL PROBLEMS ON LINEAR PROGRAMMING USING SYSTEMS OF COMPUTER MATHEMATICS
Directory of Open Access Journals (Sweden)
Volodymyr M. Mykhalevych
2013-11-01
From the perspective of the theory of educational problems, the substitution, under ICT use, of an educational problem of one discipline by an educational problem of another discipline is examined. Through the example of mathematical problems of linear programming it is shown that the student's method of operation in the course of solving an educational problem is determinant in identifying the educational problem with a specific discipline: linear programming, informatics, mathematical modeling, methods of optimization, automatic control theory, calculus etc. The necessity of renovating linear programming educational problems is substantiated, with the purpose of freeing students from bulky, repetitive arithmetic calculations and notes, which often become a barrier to a deeper understanding of the key ideas underlying the algorithms they use.
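A typical two-variable educational LP of the kind discussed can be solved by enumerating the vertices of the feasible region, i.e. the "graphical method" students apply by hand. A minimal sketch (the function name and interface are illustrative):

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= d for each (a, b, d)
    in constraints, plus x >= 0 and y >= 0, by checking every vertex of the
    feasible polygon (intersection of two constraint boundary lines)."""
    cons = list(constraints) + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x,y >= 0
    best = None
    for (a1, b1, d1), (a2, b2, d2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                          # parallel boundary lines
        x = (d1 * b2 - d2 * b1) / det         # Cramer's rule
        y = (a1 * d2 - a2 * d1) / det
        if all(a * x + b * y <= d + 1e-9 for a, b, d in cons):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best
```

For example, maximizing 3x + 2y subject to x + y <= 4 and x <= 2 gives the optimum 10 at the vertex (2, 2).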
International Nuclear Information System (INIS)
Morimoto, Tsuyoshi; Nakijima, Yasuo; Iinuma, Gen; Arai, Yasuaki; Shiraishi, Junji; Moriyama, Noriyuki; Beddoe, G.
2008-01-01
The aim of this study was to evaluate the usefulness of computer-aided detection (CAD) in diagnosing early colorectal cancer using computed tomography colonography (CTC). A total of 30 CTC data sets for 30 early colorectal cancers in 30 patients were retrospectively reviewed by three radiologists. After the primary evaluation, a second reading was performed using the CAD findings. The readers evaluated each colorectal segment for the presence or absence of colorectal cancer using five confidence rating levels. To compare the assessment results, the sensitivity and specificity with and without CAD were calculated on the basis of the confidence ratings, and differences in these variables were analyzed by receiver operating characteristic (ROC) analysis. The average sensitivities for detection without and with CAD for the three readers were 81.6% and 75.6%, respectively. Among the three readers, only one improved in sensitivity with CAD compared to without. CAD decreased specificity for all three readers. CAD detected 100% of protruding lesions but only 69.2% of flat lesions. On ROC analysis, the diagnostic performance of all three readers was decreased by the use of CAD. Currently available CAD with CTC does not improve diagnostic performance for detecting early colorectal cancer. An improved CAD algorithm is required for detecting flat lesions and reducing the false-positive rate. (author)
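The per-reader measures compared above derive directly from confusion counts; a minimal sketch (the counts used in testing are hypothetical, not the study's data):

```python
def reader_performance(tp, fn, tn, fp):
    """Sensitivity and specificity from one reader's confusion counts."""
    sensitivity = tp / (tp + fn)   # fraction of true lesions detected
    specificity = tn / (tn + fp)   # fraction of lesion-free segments correctly cleared
    return sensitivity, specificity
```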
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape-to-Buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign in May. The Computing Software and Analysis challen...
UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem
International Nuclear Information System (INIS)
Stoitsov, Mario; Nam, Hai Ah; Nazarewicz, Witold; Bulgac, Aurel; Hagen, Gaute; Kortelainen, E.M.; Pei, Junchen; Roche, K.J.; Schunck, N.; Thompson, I.; Vary, J.P.; Wild, S.
2011-01-01
The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.
Making Water Pollution a Problem in the Classroom Through Computer Assisted Instruction.
Flowers, John D.
Alternative means for dealing with water pollution control are presented for students and teachers. One computer oriented program is described in terms of teaching wastewater treatment and pollution concepts to middle and secondary school students. Suggestions are given to help teachers use a computer simulation program in their classrooms.…
Gottschlich, Carsten; Schuhmacher, Dominic
2014-01-01
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
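As a baseline against which methods like the Shortlist Method are compared, the transportation problem can be written directly as a linear program. A sketch using SciPy's HiGHS solver (this is a reference formulation, not the paper's revised simplex implementation; the function name is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def transportation_cost(supply, demand, cost):
    """Minimize total shipping cost moving `supply` to `demand` with unit
    costs `cost[i][j]`; supply and demand totals are assumed balanced."""
    m, n = len(supply), len(demand)
    c = np.asarray(cost, dtype=float).ravel()
    A_eq, b_eq = [], []
    for i in range(m):                       # each source ships all its supply
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(supply[i])
    for j in range(n):                       # each sink receives its full demand
        row = np.zeros(m * n)
        row[j::n] = 1.0
        A_eq.append(row)
        b_eq.append(demand[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, method="highs")
    return res.fun, res.x.reshape(m, n)
```

The same formulation computes the Earth Mover's Distance when supply and demand are normalized histograms.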
Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita
2009-12-01
Computer games are the most advanced form of gaming. For most people, playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults aged 16 to 74 years (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr as a daily average over the last month, 5.0% played 1-2 hr daily, 3.1% played 2-4 hr daily, and 2.2% reported playing more than 4 hr daily. The strongest risk factor for playing more than 4 hr daily was being an online player, followed by male gender and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, the prevalence of self-reported sleeping problems, depression, suicidal ideation, anxiety, obsessions/compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.
Hills, J. G.
1992-06-01
Over 125,000 encounters between a hard binary, with equal-mass components and orbital eccentricity of 0, and intruders with masses ranging from 0.01 to 10,000 solar masses are simulated. Each encounter was followed up to a maximum of 5 × 10^6 integration steps to allow long-term 'resonances', temporary trinary systems, to break into a binary and a single star. These simulations were done over a range of impact parameters to find the cross sections for various processes occurring in these encounters. A critical impact parameter found in these simulations is the one beyond which no exchange collisions can occur. The energy exchange between the binary and a massive intruder decreases greatly in collisions with Rmin not less than Rc. The semimajor axis and orbital eccentricity of the surviving binary also drop rapidly at Rc in encounters with massive intruders. The formation of temporary trinary systems is important for all intruder masses.
D'Onofrio, David J; An, Gary
2010-01-01
Background: The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these pr...
Blandford, A. E.; Smith, P. R.
1986-01-01
Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…
Lin, John Jr-Hung; Lin, Sunny S. J.
2014-01-01
The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…
Mixed hybrid finite elements and streamline computation for the potential flow problem
Kaasschieter, E.F.; Huijben, A.J.M.
1992-01-01
An important class of problems in mathematical physics involves equations of the form -∇ · (A∇φ) = f. In a variety of problems it is desirable to obtain an accurate approximation of the flow quantity u = -A∇φ. Such an accurate approximation can be determined by the mixed finite element method. In
Fast parallel DNA-based algorithms for molecular computation: the set-partition problem.
Chang, Weng-Long
2007-12-01
This paper demonstrates that basic biological operations can be used to solve the set-partition problem. In order to achieve this, we propose three DNA-based algorithms, a signed parallel adder, a signed parallel subtractor and a signed parallel comparator, that formally verify our designed molecular solutions for solving the set-partition problem.
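On a conventional computer, the set-partition decision problem that these DNA operations target has a standard dynamic-programming solution; a sketch for nonnegative integers (the paper's signed adders and subtractors would correspond to a shifted sum range):

```python
def can_partition(nums):
    """Decide whether nums can be split into two subsets of equal sum,
    via the classic O(n * sum) subset-sum dynamic program."""
    total = sum(nums)
    if total % 2:
        return False                    # odd total can never split evenly
    target = total // 2
    reachable = {0}                     # subset sums achievable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable
```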
Backtrack Programming: A Computer-Based Approach to Group Problem Solving.
Scott, Michael D.; Bodaken, Edward M.
Backtrack problem-solving appears to be a viable alternative to current problem-solving methodologies. It appears to have considerable heuristic potential as a conceptual and operational framework for small group communication research, as well as functional utility for the student group in the small group class or the management team in the…
Lima, Ricardo
2016-06-16
This paper addresses the solution of a cardinality Boolean quadratic programming problem using three different approaches. The first transforms the original problem into six mixed-integer linear programming (MILP) formulations. The second approach takes one of the MILP formulations and relies on the specific features of an MILP solver, namely using starting incumbents, polishing, and callbacks. The last involves the direct solution of the original problem by solvers that can accommodate the nonlinear combinatorial problem. Particular emphasis is placed on the definition of the MILP reformulations and their comparison with the other approaches. The results indicate that the data of the problem have a strong influence on the performance of the different approaches, and that there are clear-cut approaches that are better for some instances of the data. A detailed analysis of the results is made to identify the most effective approaches for specific instances of the data. © 2016 Springer Science+Business Media New York
Lima, Ricardo; Grossmann, Ignacio E.
2016-01-01
Energy Technology Data Exchange (ETDEWEB)
Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)
2016-12-15
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and the possibility to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iterative process.
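The EDELWEISS scheme itself is not detailed in the abstract; the toy sketch below only illustrates the general idea of representing the solution of a linear ODE as a polynomial series on [0, 1]. The Legendre basis, the collocation approach, and the equispaced points are my assumptions:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def solve_linear_ode(a, y0, degree=10):
    """Approximate y' = a*y, y(0) = y0 on [0, 1] by a Legendre series,
    determined by collocation plus the initial condition."""
    t = np.linspace(0.0, 1.0, degree + 1)     # collocation points in [0, 1]
    x = 2.0 * t - 1.0                         # map to Legendre's native [-1, 1]
    V = leg.legvander(x, degree)              # basis values at the points
    # derivative of each basis function w.r.t. t (chain rule: dx/dt = 2)
    dV = np.column_stack([2.0 * leg.legval(x, leg.legder(np.eye(degree + 1)[k]))
                          for k in range(degree + 1)])
    M = dV - a * V                            # rows enforce y'(t_i) - a*y(t_i) = 0
    M[0] = V[0]                               # replace the t=0 row by y(0) = y0
    rhs = np.zeros(degree + 1)
    rhs[0] = y0
    coef = np.linalg.solve(M, rhs)
    return lambda tt: leg.legval(2.0 * np.asarray(tt) - 1.0, coef)
```

For a system y' = Ay the same construction applies blockwise, with one coefficient vector per component.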
International Nuclear Information System (INIS)
Sahni, D.C.; Sharma, A.
2000-01-01
The integral form of the one-speed, spherically symmetric neutron transport equation with isotropic scattering is considered. Two standard problems are solved using the normal mode expansion technique. The expansion coefficients are obtained by solving their singular integral equations. It is shown that these expansion coefficients provide a representation of all spherical harmonics moments of the angular flux as a superposition of Bessel functions. It is seen that large errors occur in the computation of higher moments unless certain precautions are taken. The reasons for this phenomenon are explained; they throw some light on the failure of the spherical harmonics method in treating spherical geometry problems, as observed by Aronsson.
Smith, Mike U.
1991-01-01
Criticizes an article by Browning and Lehman (1988) for (1) using "gene" instead of allele, (2) misusing the word "misconception," and (3) the possible influences of the computer environment on the results of the study. (PR)
Adaptive Radar Signal Processing-The Problem of Exponential Computational Cost
National Research Council Canada - National Science Library
Rangaswamy, Muralidhar
2003-01-01
.... Extensions to handle the case of non-Gaussian clutter statistics are presented. Current challenges of limited training data support, computational cost, and severely heterogeneous clutter backgrounds are outlined...
COMPUTER TOOLS OF DYNAMIC MATHEMATIC SOFTWARE AND METHODICAL PROBLEMS OF THEIR USE
Olena V. Semenikhina; Maryna H. Drushliak
2014-01-01
The article presents the results of an analysis of the standard computer tools of dynamic mathematics software which are used in solving tasks, and of the tools on which the teacher can rely in teaching mathematics. The possibility of organizing experimental investigation of mathematical objects on the basis of these tools, the formulation of new tasks on the basis of a limited number of tools, and fast automated checking are specified. Some methodological comments on the application of computer tools and ...
Speed test results and hardware/software study of computational speed problem, appendix D
1984-01-01
The HP9845C is a desktop computer which is tested and evaluated for processing speed. A study was made to determine the availability and approximate cost of computers and/or hardware accessories necessary to meet the 20 ms sample period speed requirement. Additional requirements were that the control algorithm could be programmed in a high-level language and that the machine have sufficient storage to store the data from a complete experiment.
Impedance computations and beam-based measurements: A problem of discrepancy
Smaluk, Victor
2018-04-01
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
International Nuclear Information System (INIS)
Gitinavard, Hossein; Mousavi, S. Meysam; Vahdani, Behnam
2017-01-01
In numerous real-world energy decision problems, decision makers often encounter complex environments in which imprecise data and uncertain information make it difficult to reach an appropriate decision. In this paper, a new soft computing group decision-making approach is introduced based on a novel compromise ranking method and interval-valued hesitant fuzzy sets (IVHFSs) for energy decision-making problems under multiple criteria. In the proposed approach, the assessment information is provided by energy experts or decision makers in the form of interval-valued hesitant fuzzy elements under incomplete criteria weights. In this respect, a new ranking index based on an interval-valued hesitant fuzzy Hamming distance measure is presented to prioritize energy candidates, and criteria weights are computed by an extended maximizing deviation method that considers experts' judgments about the relative importance of each criterion. Also, a decision-making trial and evaluation laboratory (DEMATEL) method is extended to an IVHF environment to compute the interdependencies between and within the selected criteria in the hierarchical structure. Accordingly, to demonstrate the applicability of the presented approach, a case study and a practical example are provided regarding the hierarchical structure and criteria interdependency relations for renewable energy and energy policy selection problems. The obtained computational results are compared with a fuzzy decision-making method from the recent literature, using several comparison parameters, to show the advantages and constraints of the proposed approach. Finally, a sensitivity analysis is carried out to indicate the effects of different criteria weights on the ranking results and to demonstrate the robustness or sensitivity of the proposed soft computing approach with respect to the relative importance of criteria. - Highlights: • Introducing a novel interval-valued hesitant fuzzy compromise ranking method. • Presenting
Nitsche's method for interface problems in computational mechanics
Energy Technology Data Exchange (ETDEWEB)
Hansbo, P. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Applied Mechanics
2005-07-01
We give a review of Nitsche's method applied to interface problems, involving real or artificial interfaces. Applications to unfitted meshes, Chimera meshes, cut meshes, fictitious domain methods, and model coupling are discussed. (orig.)
Computational methods for the nuclear and neutron matter problems: Final report
International Nuclear Information System (INIS)
Kalos, M.H.; Chen, J.M.C.
1988-01-01
This paper discusses the following topics: variational Monte Carlo study of oxygen 16; microscopic calculations of alpha-neutron scattering; exact Monte Carlo treatment of the fermion problem; and random field method
Analysis of Hard Thin Film Coating
Shen, Dashen
1998-01-01
MSFC is interested in developing hard thin film coatings for bearings. Bearing wear is an important problem for space flight engines. A hard thin film coating can drastically improve the surface of a bearing and its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach uses electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.
Kim, Nam Ju
2017-01-01
This multiple-paper dissertation addressed several issues in problem-based learning (PBL) through conceptual analysis, meta-analysis, and empirical research. PBL is characterized by ill-structured tasks, a self-directed learning process, and a combination of individual and cooperative learning activities. Students who lack content knowledge and problem-solving skills may struggle to address associated tasks that are beyond their current ability levels in PBL. This dissertation addressed a) scaf...
Computation of optimal transport and related hedging problems via penalization and neural networks
Eckstein, Stephan; Kupper, Michael
2018-01-01
This paper presents a widely applicable approach to solving (multi-marginal, martingale) optimal transport and related problems via neural networks. The core idea is to penalize the optimization problem in its dual formulation and reduce it to a finite dimensional one which corresponds to optimizing a neural network with smooth objective function. We present numerical examples from optimal transport, martingale optimal transport, portfolio optimization under uncertainty and generative adversa...
Towards high-performance symbolic computing using MuPAD as a problem solving environment
Sorgatz, A
1999-01-01
This article discusses the approach of developing MuPAD into an open and parallel problem solving environment for mathematical applications. It introduces the key technologies 'domains' and 'dynamic modules' and describes the current state of macro parallelism, which covers three fields of parallel programming: message passing, network variables and work groups. First parallel algorithms and examples of using the prototype of the MuPAD problem solving environment are demonstrated. (12 refs).
Directory of Open Access Journals (Sweden)
Jianfei Zhang
2013-01-01
Graphics processing units (GPUs) have obtained great success in scientific computations due to their tremendous computational horsepower and very high memory bandwidth. This paper discusses an efficient way to implement a polynomial preconditioned conjugate gradient solver for the finite element computation of elasticity on NVIDIA GPUs using the compute unified device architecture (CUDA). The sliced block ELLPACK (SBELL) format is introduced to store the sparse matrix arising from finite element discretization of elasticity with fewer padding zeros than traditional ELLPACK-based formats. Polynomial preconditioning methods have been investigated both in convergence and running time. Based on overall performance, the least-squares (L-S) polynomial method is chosen as the preconditioner in the PCG solver for finite element equations derived from elasticity, for its best results on different example meshes. In the PCG solver, a mixed precision algorithm is used not only to reduce the overall computational and storage requirements and bandwidth but to make full use of the capacity of the GPU devices. With the SBELL format and mixed precision algorithm, the GPU-based L-S preconditioned CG can achieve a speedup of about 7–9 over a CPU implementation.
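A simplified CPU-side sketch of polynomial preconditioned CG; a truncated Neumann-series (Jacobi-based) polynomial stands in for the paper's least-squares polynomial, and dense NumPy arrays stand in for the SBELL sparse format:

```python
import numpy as np

def poly_precond_cg(A, b, degree=3, tol=1e-8, max_iter=500):
    """Conjugate gradients with a Neumann-series polynomial preconditioner:
    M^{-1} r ~ sum_{k=0}^{degree} (I - D^{-1}A)^k D^{-1} r, with D = diag(A)."""
    d_inv = 1.0 / np.diag(A)

    def apply_minv(r):
        z = d_inv * r
        s = z.copy()
        for _ in range(degree):
            s = z + s - d_inv * (A @ s)     # s <- z + (I - D^{-1}A) s
        return s

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = apply_minv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, max_iter
```

The only kernels involved are matrix-vector products and vector updates, which is why the sparse storage format and precision choices dominate GPU performance.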
Application of a fast skyline computation algorithm for serendipitous searching problems
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information of non-skyline entries must be stored since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm called jointed rooted-tree (JR-tree) manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
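For the static case, a naive quadratic-time skyline extraction (treating smaller as better in every attribute) is easy to state; continuous skyline maintenance, as the abstract explains, is the hard part:

```python
def skyline(points):
    """Return the Pareto-optimal (skyline) entries of a list of tuples,
    minimizing every attribute."""
    def dominates(p, q):
        # p dominates q: no worse in every attribute, strictly better in one
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```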
Solving math and science problems in the real world with a computational mind
Directory of Open Access Journals (Sweden)
Juan Carlos Olabe
2014-07-01
This article presents a new paradigm for the study of the Math and Science curriculum during primary and secondary education. A workshop for Education undergraduates at four different campuses (n=242) was designed to introduce participants to the new paradigm. In order to make a qualitative analysis of the current school methodologies in mathematics, participants were introduced to a taxonomic tool for the description of K-12 Math problems. The tool allows the identification, decomposition and description of Type-A problems, the characteristic ones in the traditional curriculum, and of Type-B problems in the new paradigm. The workshops culminated with a set of surveys where participants were asked to assess both the current and the new proposed paradigms. The surveys in this study revealed that according to the majority of participants: (i) the K-12 Mathematics curricula are designed to teach students exclusively the resolution of Type-A problems; (ii) real-life Math problems respond to a paradigm of Type-B problems; and (iii) the current Math curriculum should be modified to include this new paradigm.
DEFF Research Database (Denmark)
Iris, Cagatay; Røpke, Stefan; Pacino, Dario
...... problem (BAP) and the quay crane assignment problem (QCAP). Such an integrated problem is known in the literature ([1]) as the Berth Allocation and Crane Assignment Problem (BACAP). The state-of-the-art [1] models this problem using two decision variables X_ij and Y_ij, representing respectively the partial...... BACAP can also be formulated as a Generalized Set Partitioning Problem (GSPP). The model is an extension of the BAP formulation in [2] where we add new...... since start-time variables S_i are integrated into the objective function. Inequality (1) is based on the following two observations. First, if vessel i berths before vessel j (X_ij=1), then the start time of vessel j (S_j) should be larger than the start time of vessel i plus its minimum expected processing...
Al Rashidi, Sultan H; Alhumaidan, H
2017-01-01
Computers and other visual display devices are now an essential part of our daily life. With their increased use, a very large population globally is experiencing various ocular symptoms such as dry eyes, eye strain, irritation, and redness of the eyes, to name a few. Collectively, such computer-related symptoms are usually referred to as computer vision syndrome (CVS). The current study aims to define the prevalence, community knowledge, pathophysiology, associated factors, and prevention of CVS. This is a cross-sectional study conducted at Qassim University College of Medicine over a period of 1 year, from January 2015 to January 2016, using a questionnaire to collect relevant data including demographics and the various variables to be studied. 634 students were inducted from a public-sector university of Qassim, Saudi Arabia, regardless of age and gender. The data were then statistically analyzed in SPSS version 22, and the descriptive data were expressed as percentages, mode, and median, using graphs where needed. A total of 634 students with a mean age of 21.40 years (SD 1.997, range 18-25) were included as study subjects, with a male predominance (77.28%). Of the total subjects, the majority (459, 72%) presented with acute symptoms while the remainder had chronic problems. A clear majority had carried the symptoms for 1 month. The statistical analysis revealed serious symptoms in the majority of study subjects, especially those who are permanent users of a computer for long hours. Continuous use of computers for long hours was found to be associated with severe vision problems, especially in those who use computers and similar devices for long durations.
Heuristics for the Buffer Allocation Problem with Collision Probability Using Computer Simulation
Directory of Open Access Journals (Sweden)
Eishi Chiba
2015-01-01
Full Text Available The standard manufacturing system for Flat Panel Displays (FPDs) consists of a number of pieces of equipment in series. Each piece of equipment usually has a number of buffers to prevent collisions between glass substrates. However, in reality, very few of these buffers seem to be used, which means that redundant buffers exist. In order to reduce the cost and space necessary for manufacturing, the number of buffers should be minimized while taking possible collisions into account. In this paper, we focus on an in-line system in which each piece of equipment can have any number of buffers. For this in-line system, we present a computer simulation method for computing the probability that a collision occurs. Based on this method, we try to find a buffer allocation that achieves the smallest total number of buffers under an arbitrarily specified collision probability. We also implement our proposed method and present some computational results.
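A minimal Monte Carlo sketch of this idea — estimate the collision probability for a given buffer count, then take the smallest count meeting a target — might look as follows. The two-machine model, tact time, processing-time distribution, and 5% target below are illustrative assumptions, not the paper's actual simulation model:

```python
import random

def collision_probability(num_buffers, n_substrates=2000, tact=1.0,
                          mean_proc=0.9, jitter=0.3, trials=30, seed=0):
    """Monte Carlo estimate of the probability that at least one collision
    occurs in a toy two-machine in-line system with `num_buffers` buffers.
    A collision is counted when a substrate leaves machine 1 while machine 2
    and all buffer slots are still occupied (illustrative model only)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        free_at = [0.0] * (num_buffers + 1)   # machine 2 plus buffer slots
        collided = False
        for i in range(n_substrates):
            # substrate i leaves machine 1 at its tact plus a noisy process time
            t_out = i * tact + mean_proc + rng.uniform(0, jitter)
            slot = min(range(len(free_at)), key=lambda s: free_at[s])
            if free_at[slot] > t_out:          # nowhere to go: collision
                collided = True
                break
            free_at[slot] = t_out + mean_proc + rng.uniform(0, jitter)
        collisions += collided
    return collisions / trials

# smallest buffer count meeting a 5% collision-probability target
best = min(b for b in range(6) if collision_probability(b) <= 0.05)
```

The search over `b` mirrors the paper's goal of minimizing the total buffer count subject to a specified collision-probability bound.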
International Nuclear Information System (INIS)
Abdel Gwad, A.S.A.
2008-01-01
The large-scale industrialization of the 20th century produced both positive and negative effects. One of the negative effects was the production of hazardous wastes as a result of manufacturing processes. Preventing, recycling, and disposing of these wastes in an environmentally sustainable manner presents a challenge for the 21st century. The hazardous waste management (HWM) problem is defined as the combined decision of selecting the disposal method, siting the disposal plants, and deciding on the waste flow structure. The HWM problem has additional requirements depending on the selected disposal method. For land disposal, the main issue is leachate: the land disposal facility should guarantee the safety of nearby groundwater. For incineration plants, the main requirement is the satisfaction of air pollution standards at population centers. The thesis consists of five chapters. Chapter I presents a survey of some important concepts of the HWM problem. Chapter II introduces three different approaches for treating the HWM problem: the first presents a mathematical model in which incineration is used as the disposal method; the second discusses a multi-objective location-routing model; and the third presents a waste fuel blending approach using goal programming. Chapter III presents three different applications of the waste management problem: the first is concerned with waste collection in Southern Italy; the second discusses the management of solid waste in the city of Regina, Canada, using an interval parameter technique; and the third introduces the planning of waste management systems with economies of scale for the region of Hamilton, Ontario, Canada. Chapter IV is devoted to an optimization of the model
Data management problems with a distributed computer network on nuclear power stations
International Nuclear Information System (INIS)
Davis, I.
1980-01-01
It is generally accepted within the Central Electricity Generating Board that the centralized process computers at some nuclear power plants are going to be replaced with distributed systems. Work on the theoretical considerations involved in such a replacement, including the allocation of data within the system, is going on with the goal of developing a simple, pragmatic approach to the determination of the required system resilience. A flexible network architecture which can accommodate future expansion and can be understood by non-computer specialists can thus be built up. (LL)
Computational issues and algorithm assessment for shock/turbulence interaction problems
International Nuclear Information System (INIS)
Larsson, J; Cook, A; Lele, S K; Moin, P; Cabot, B; Sjoegreen, B; Yee, H; Zhong, X
2007-01-01
The paper provides an overview of the challenges involved in the computation of flows with interactions between turbulence, strong shockwaves, and sharp density interfaces. The prediction and physics of such flows are the focus of an ongoing project in the Scientific Discovery through Advanced Computing (SciDAC) program. While the project is fundamental in nature, there are many important potential applications of scientific and engineering interest, ranging from inertial confinement fusion to exploding supernovae. The essential challenges will be discussed, and some representative numerical results that highlight these challenges will be shown. In addition, the overall approach taken in this project will be outlined.
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Jasim Mohammed, M; Ibrahim, Rabha W; Ahmad, M Z
2017-03-01
In this paper, we consider a low-initial-population model. Our aim is to study the periodicity computation of this model by using neutral differential equations, which appear in various fields including biology. We generalize the neutral Rayleigh equation to the third order by exploiting fractional calculus, in particular the Riemann-Liouville differential operator. We establish the existence and uniqueness of a periodic outcome. The technique depends on the continuation theorem of coincidence degree theory. In addition, an example is presented to demonstrate the finding.
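For reference, the Riemann-Liouville fractional differential operator invoked in such generalizations is conventionally defined, for order α with n−1 < α ≤ n, as:

```latex
D^{\alpha}_{a^{+}} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
\frac{d^{n}}{dt^{n}} \int_{a}^{t} (t-s)^{\,n-\alpha-1}\, f(s)\, ds,
\qquad n-1 < \alpha \le n .
```

This is the standard textbook definition, not a formula reproduced from the paper itself.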
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF), based on point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important, and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
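The correlation-based pre-selection step questioned in point 3 can be sketched generically as follows; the greedy rule, the 0.9 threshold, and the toy predictor names are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def drop_highly_correlated(X, names, threshold=0.9):
    """Greedily drop predictors whose absolute pairwise correlation with an
    already-kept predictor exceeds `threshold`. This is the conventional
    pre-selection step; the study argues it can discard useful predictors."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

rng = np.random.default_rng(0)
depth = rng.normal(size=200)
backscatter = 0.95 * depth + 0.05 * rng.normal(size=200)  # nearly collinear
slope = rng.normal(size=200)
X = np.column_stack([depth, backscatter, slope])
kept = drop_highly_correlated(X, ["depth", "backscatter", "slope"])
# backscatter is dropped because it correlates strongly with depth
```

The study's point is exactly that a dropped predictor like `backscatter` may still carry predictive signal the random forest could exploit.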
Tseng, Min-chen
2014-01-01
This study investigated the online reading performances and the level of visual fatigue from the perspective of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demands.…
Cognitive processes in solving variants of computer-based problems used in logic teaching
Eysink, Tessa H.S.; Dijkstra, S.; Kuper, Jan
2001-01-01
The effect of two instructional variables, visualisation and manipulation of objects, in learning to use the logical connective, conditional, was investigated. Instructions for 66 first-year social science students were varied in the computer-based learning environment Tarski's World, designed for
Computer Solution of the Two-Dimensional Tether Ball: Problem to Illustrate Newton's Second Law.
Zimmerman, W. Bruce
Force diagrams involving angular velocity, linear velocity, centripetal force, work, and kinetic energy are given with related equations of motion expressed in polar coordinates. The computer is used to solve differential equations, thus reducing the mathematical requirements of the students. An experiment is conducted using an air table to check…
Computer-based tests: The impact of test design and problem of equivalency
Czech Academy of Sciences Publication Activity Database
Květon, Petr; Jelínek, Martin; Vobořil, Dalibor; Klimusová, H.
-, č. 23 (2007), s. 32-51 ISSN 0747-5632 R&D Projects: GA ČR(CZ) GA406/99/1052; GA AV ČR(CZ) KSK9058117 Institutional research plan: CEZ:AV0Z7025918 Keywords : Computer-based assessment * speeded test * equivalency Subject RIV: AN - Psychology Impact factor: 1.344, year: 2007
Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis
Lovett, Andrew; Forbus, Kenneth
2011-01-01
A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…
Timmers, Caroline; Walraven, Amber; Veldkamp, Bernard P.
2015-01-01
This study examines the effect of regulation feedback in a computer-based formative assessment in the context of searching for information online. Fifty 13-year-old students completed two randomly selected assessment tasks, receiving automated regulation feedback between them. Student performance
Science of consciousness and the hard problem
Energy Technology Data Exchange (ETDEWEB)
Stapp, H.P.
1996-05-22
Quantum theory is essentially a rationally coherent theory of the interaction of mind and matter, and it allows our conscious thoughts to play a causally efficacious and necessary role in brain dynamics. It therefore provides a natural basis, created by scientists, for the science of consciousness. As an illustration, it is explained how the interaction of brain and consciousness can speed up brain processing, and thereby enhance the survival prospects of conscious organisms, as compared to similar organisms that lack consciousness. As a second illustration, it is explained how, within the quantum framework, the consciously experienced "I" directs the actions of a human being. It is concluded that contemporary science already has an adequate framework for incorporating causally efficacious experiential events into the physical universe in a manner that: (1) puts the neural correlates of consciousness into the theory in a well-defined way, (2) explains in principle how the effects of consciousness, per se, can enhance the survival prospects of organisms that possess it, (3) allows this survival effect to feed into phylogenetic development, and (4) explains how the consciously experienced "I" can direct human behaviour.
Butler, Stephen F; Villapiano, Albert; Malinow, Andrew
2009-12-01
People tend to disclose more personal information when communication is mediated through the use of a computer. This study was conducted to examine the impact of this phenomenon on the way respondents answer questions during computer-mediated self-administration of the Addiction Severity Index (ASI), called the Addiction Severity Index-Multimedia Version® (ASI-MV®). A sample of 142 clients in substance abuse treatment was administered the ASI via an interviewer and the computerized ASI-MV®, three to five days apart in a counterbalanced order. Seven composite scores were compared between the two test administrations using paired t-tests. Post hoc analyses examined interviewer effects. Comparisons of composite scores for each of the domains between the face-to-face and computer-mediated, self-administered ASI revealed that significantly greater problem severity was reported by clients in five of the seven domains during administration of the computer-mediated, self-administered version compared to the trained-interviewer version. Item analyses identified certain items as responsible for significant differences, especially those asking clients to rate need for treatment. All items that differed significantly between the two modes of administration revealed greater problem severity reported on the ASI-MV® as compared to the interviewer-administered assessment. Post hoc analyses yielded significant interviewer effects on four of the five domains where differences were observed. These data support a growing literature documenting a tendency for respondents to be more self-disclosing in a computer-mediated format than in a face-to-face interview. Differences in interviewer skill in establishing rapport may account for these observations.
Development of Procedures to Assess Problem-Solving Competence in Computing Engineering
Pérez, Jorge; Vizcarro, Carmen; García, Javier; Bermúdez, Aurelio; Cobos, Ruth
2017-01-01
In the context of higher education, a competence may be understood as the combination of skills, knowledge, attitudes, values, and abilities that underpin effective and/or superior performance in a professional area. The aim of the work reported here was to design a set of procedures to assess a transferable competence, i.e., problem solving, that…
Using Dynamic Geometry and Computer Algebra Systems in Problem Based Courses for Future Engineers
Tomiczková, Svetlana; Lávicka, Miroslav
2015-01-01
It is a modern trend today when formulating the curriculum of a geometric course at the technical universities to start from a real-life problem originated in technical praxis and subsequently to define which geometric theories and which skills are necessary for its solving. Nowadays, interactive and dynamic geometry software plays a more and more…
International Nuclear Information System (INIS)
LUCCIO, A.U.; DIMPERIO, N.L.; SAMULYAK, R.; BEEB-WANG, J.
2001-01-01
Simulation of high intensity accelerators leads to the solution of the Poisson equation, to calculate space charge forces in the presence of acceleration chamber walls. We reduced the problem to "two-and-a-half" dimensions for long particle bunches, characteristic of large circular accelerators, and applied the results to the tracking code Orbit.
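The transverse Poisson solve at the heart of such space-charge calculations can be sketched with a minimal finite-difference Jacobi iteration; the grid size, units, and point-like charge below are illustrative assumptions, not details from this work:

```python
import numpy as np

def solve_poisson_2d(rho, h=1.0, iters=2000):
    """Jacobi iteration for  laplacian(phi) = -rho  on a rectangular grid
    with phi = 0 on the boundary (a perfectly conducting chamber wall).
    Returns the electrostatic potential up to constant factors."""
    phi = np.zeros_like(rho)
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h * h * rho[1:-1, 1:-1])
    return phi

# point-like transverse slice of bunch charge in the middle of a 33x33 grid
rho = np.zeros((33, 33))
rho[16, 16] = 1.0
phi = solve_poisson_2d(rho)
```

The potential peaks at the charge and falls to zero at the grounded walls; the gradient of `phi` then yields the space-charge force used in tracking.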
The Effect of Simulation Games on the Learning of Computational Problem Solving
Liu, Chen-Chung; Cheng, Yuan-Bang; Huang, Chia-Wen
2011-01-01
Simulation games are now increasingly applied to many subject domains as they allow students to engage in discovery processes, and may facilitate a flow learning experience. However, the relationship between learning experiences and problem solving strategies in simulation games still remains unclear in the literature. This study, thus, analyzed…
The Cross-Contextual Transfer of Problem Solving Strategies from Logo to Non-Computer Domains.
Swan, Karen; Black, John B.
This report investigated the relationship between learning to program LOGO and the development of problem solving skills. Subjects were 133 students in grades 4-8 who had at least 30 hours of experience with both graphics and lists programming in Logo. Students were randomly assigned to one of three contextual groupings, which received graphics,…
Schoppek, Wolfgang; Tulis, Maria
2010-01-01
The fluency of basic arithmetical operations is a precondition for mathematical problem solving. However, the training of skills plays a minor role in contemporary mathematics instruction. The authors proposed individualization of practice as a means to improve its efficiency, so that the time spent with the training of skills is minimized. As a…
Individual Differences in Strategy Use on Division Problems: Mental versus Written Computation
Hickendorff, Marian; van Putten, Cornelis M.; Verhelst, Norman D.; Heiser, Willem J.
2010-01-01
Individual differences in strategy use (choice and accuracy) were analyzed. A sample of 362 Grade 6 students solved complex division problems under 2 different conditions. In the choice condition students were allowed to use either a mental or a written strategy. In the subsequent no-choice condition, they were required to use a written strategy.…
An Examination of the Relationship between Computation, Problem Solving, and Reading
Cormier, Damien C.; Yeo, Seungsoo; Christ, Theodore J.; Offrey, Laura D.; Pratt, Katherine
2016-01-01
The purpose of this study is to evaluate the relationship of mathematics calculation rate (curriculum-based measurement of mathematics; CBM-M), reading rate (curriculum-based measurement of reading; CBM-R), and mathematics application and problem solving skills (mathematics screener) among students at four levels of proficiency on a statewide…
On Problem Based Learning and Application to Computer Games Design Teaching
DEFF Research Database (Denmark)
Timcenko, Olga; Stojic, Radoslav
2012-01-01
Problem-based learning (PBL) is a pedagogical approach which started in the early 1970s and is by now well developed and established. Aalborg University in Denmark is one of the pioneering universities in PBL worldwide and has accumulated extensive experience with PBL across many different study lines. One of them is M...
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multicore processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin-film-transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware used commonly in psychological research.
Distribution of Software Changes for Battlefield Computer Systems: A lingering Problem
1983-06-03
[Footnote and citation fragments omitted: references to a Department of Defense document of 10 June 1963 and to Automatic Data Processing Systems, Book-1 Introduction and Book-2 Army Use of ADPS (U.S. Army Signal School, Fort Monmouth, New Jersey, 1960).] ...execute an application or utility program. It controls how the computer functions during a given operation. Utility programs are merely general use
Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly
2014-05-01
Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve such problems, one needs to apply modern results from different disciplines and recent developments in mathematical modeling, the theory of adjoint equations and optimal control, inverse problems, numerical methods theory, numerical algebra, and scientific computing. These problems are studied at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the results on the special database development for the ICS "INM RAS - Black Sea" are presented. The input information for the ICS is discussed, and some special data processing procedures are described. Results of forecasts using the ICS "INM RAS - Black Sea" with operational observation data assimilation are also presented. This study was supported by the Russian Foundation for Basic Research (project No 13-01-00753) and by the Presidium Program of the Russian Academy of Sciences (project P-23 "Black Sea as an imitational ocean model").
Quantum elastic net and the traveling salesman problem
International Nuclear Information System (INIS)
Kostenko, B.F.; Pribis, J.; Yur'ev, M.Z.
2009-01-01
The theory of computer calculations strongly depends on the nature of the elements the computer is made of. Quantum interference allows one to formulate the Shor factorization algorithm, which turned out to be more effective than any algorithm written for classical computers. Similarly, quantum wave packet reduction allows one to devise the Grover search algorithm, which outperforms any classical one. In the present paper we argue that quantum incoherent tunneling can be used to elaborate new algorithms able to solve some NP-hard problems, such as the Traveling Salesman Problem, considered intractable in the classical theory of computation.
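For scale, an exact classical solution by exhaustive search grows factorially with the number of cities — the intractability that motivates proposals like this one. A minimal sketch (unrelated to the paper's quantum algorithm; the four-city instance is an illustrative assumption):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by exhaustive search over (n-1)! tours. Feasible only for
    very small n, which is precisely why better algorithms are sought."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):          # fix city 0 as the start
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# four cities at the corners of a unit square: the optimal tour follows
# the perimeter and has length 4
dist = [[0, 1, 2**0.5, 1],
        [1, 0, 1, 2**0.5],
        [2**0.5, 1, 0, 1],
        [1, 2**0.5, 1, 0]]
tour, length = tsp_brute_force(dist)
```

Doubling the city count from 10 to 20 multiplies the search space by roughly 10^11, which is the combinatorial wall any non-classical approach would need to circumvent.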
Directory of Open Access Journals (Sweden)
Harold Germán Rodríguez Celis
2011-12-01
Full Text Available This study was designed to identify the relationship between video and computer game use and attention, memory, academic performance and problem behaviors in school children in Bogotá. Memory and attention were assessed using a set of different scales of the ENI Battery (Matute, Rosselli, Ardila, & Ostrosky-Solís, 2007). For academic performance, school newsletters were used. Behavioral problems were assessed through the CBCL/6-18 questionnaire (Child Behavior Checklist; Achenbach & Edelbrock, 1983). 123 children and 99 parents were enrolled in 2 factorial-design experimental studies. The results did not support the hypothesis of a significant change in memory tests, or in intra-subject selective visual and hearing attention. However, these variables showed significant differences among children exposed to habitual video game consumption. No differences were found between the level of regular video game consumption in school children and academic performance variables or behavioral problems.
Leibov Roman
2017-01-01
This paper presents a bilinear approach to the problem of approximating systems of nonlinear differential equations. Sometimes linearization of the right-hand sides of nonlinear differential equations is extremely difficult or even impossible; piecewise-linear approximation can then be used. Bilinear differential equations make it possible to improve on piecewise-linear behavior and to reduce the errors at the borders between different linear differential equation systems ...
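The piecewise-linear starting point can be sketched as follows: replace a nonlinear right-hand side by a linear interpolant on breakpoints, and note that refining the segments shrinks the error (the non-smooth test function and breakpoint counts are illustrative assumptions, not the paper's example):

```python
import numpy as np

def piecewise_linear(f, breakpoints):
    """Return a piecewise-linear interpolant of f on the given breakpoints,
    the kind of right-hand-side surrogate used when direct linearization
    of a nonlinear ODE is impractical."""
    xs = np.asarray(breakpoints, dtype=float)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

f = lambda x: x * np.abs(x)                    # a non-smooth nonlinear RHS
coarse = piecewise_linear(f, np.linspace(-2, 2, 5))
fine = piecewise_linear(f, np.linspace(-2, 2, 41))

grid = np.linspace(-2, 2, 1001)
err_coarse = np.max(np.abs(coarse(grid) - f(grid)))
err_fine = np.max(np.abs(fine(grid) - f(grid)))
# refining the segments shrinks the worst-case approximation error
```

The bilinear terms advocated in the paper act on exactly these inter-segment borders, where a pure piecewise-linear surrogate is least accurate.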
Kim, Nam Ju
This multiple-paper dissertation addressed several issues in problem-based learning (PBL) through conceptual analysis, meta-analysis, and empirical research. PBL is characterized by ill-structured tasks, a self-directed learning process, and a combination of individual and cooperative learning activities. Students who lack content knowledge and problem-solving skills may struggle to address tasks that are beyond their current ability levels in PBL. This dissertation addressed a) scaffolding characteristics (i.e., scaffolding types, delivery method, customization) and their effects on students' perception of optimal challenge in PBL, b) the possibility of virtual learning environments for PBL, and c) the importance of information literacy for successful PBL learning. Specifically, this dissertation demonstrated the effectiveness of scaffolding customization (i.e., fading, adding, and fading/adding) in enhancing students' self-directed learning in PBL. Moreover, the effectiveness of scaffolding was greatest when customization was self-selected rather than based on a fixed-time interval or on performance. This suggests that it might be important for students to take responsibility for their learning in PBL, and that individualized, just-in-time scaffolding can be one solution to K-12 students' difficulties in improving problem-solving skills and adjusting to PBL.
Application of the TEMPEST computer code to canister-filling heat transfer problems
International Nuclear Information System (INIS)
Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.
1988-03-01
Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparison with TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full-enclosure radiation model to allow radiation heat transfer to non-nearest-neighbor cells. 5 refs., 47 figs., 17 tabs
International Nuclear Information System (INIS)
Tselios, Kostas; Simos, T.E.
2007-01-01
In this Letter a new explicit fourth-order seven-stage Runge-Kutta method, with a combination of minimal dispersion and dissipation error and maximal accuracy and stability limit along the imaginary axis, is developed. This method was produced by a general function that was constructed to satisfy all the above requirements and from which all the existing fourth-order six-stage RK methods can be produced. The new method is more efficient than the other optimized methods for acoustic computations.
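For context, the classical fourth-order Runge-Kutta step that such optimized schemes refine can be sketched as follows (the paper's seven-stage coefficients are not reproduced here; this is the textbook four-stage baseline):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y).
    Optimized low-dispersion/low-dissipation schemes keep this order of
    accuracy while spending extra stages to shape the error for acoustics."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate y' = -y from y(0) = 1 to t = 1; the exact answer is e^(-1)
y, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

The six- and seven-stage variants discussed in the abstract trade these extra function evaluations for better dispersion and dissipation behavior per step.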
International Nuclear Information System (INIS)
Edwards, R A
2008-01-01
New high-throughput DNA sequencing technologies have revolutionized how scientists study the organisms around us. In particular, microbiology - the study of the smallest, unseen organisms that pervade our lives - has embraced these new techniques to characterize and analyze cellular constituents and to use this information to develop novel tools, techniques, and therapeutics. So-called next-generation DNA sequencing platforms have resulted in huge increases in the amount of raw data that can be rapidly generated. Argonne National Laboratory developed the premier platform for the analysis of these new data (mg-rast), which is used by microbiologists worldwide. This paper uses the accounting from the computational analysis of more than 10,000,000,000 bp of DNA sequence data, describes an analysis of the advanced computational requirements, and suggests the level of analysis that will be essential as microbiologists move to understand how these tiny organisms affect our everyday lives. The results from this analysis indicate that data analysis is a linear problem, but that most analyses are held up in queues. With sufficient resources, computations could be completed in a few hours for a typical dataset. These data also suggest execution times that delimit timely completion of computational analyses, and provide bounds for problematic processes.
Li, Kenli; Zou, Shuting; Xv, Jin
2008-01-01
Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty of solving the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation, and this paper demonstrates that in principle this is possible. Three DNA-based algorithms are described: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n). The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.
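For context, the field arithmetic that the DNA algorithms parallelize can be sketched conventionally. A bit-level GF(2^n) multiplication, shown here for n = 8 with the AES reduction polynomial as an illustrative modulus (this polynomial is our choice, not taken from the paper), looks like this:

```python
# Multiplication in GF(2^n), here n = 8 with the AES polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B) as an illustrative modulus.
def gf_mul(a, b, n=8, poly=0x11B):
    """Carry-less multiply of a and b, reduced modulo poly."""
    r = 0
    for _ in range(n):
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & (1 << n):    # degree reached n: reduce modulo poly
            a ^= poly
    return r

# In this field {0x53} * {0xCA} = {0x01}, i.e. the two are inverses.
```

A DNA-based multiplier performs the same shift-and-reduce logic, but encodes operands in strands so that many multiplications proceed simultaneously, which is where the claimed polynomial biological operation time comes from.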
Directory of Open Access Journals (Sweden)
Suheel Abdullah Malik
2014-01-01
Full Text Available We present a hybrid heuristic computing method for the numerical solution of nonlinear singular boundary value problems arising in physiology. The approximate solution is deduced as a linear combination of some log sigmoid basis functions. A fitness function representing the sum of the mean square error of the given nonlinear ordinary differential equation (ODE) and its boundary conditions is formulated. The optimization of the unknown adjustable parameters contained in the fitness function is performed by a hybrid heuristic computation algorithm based on the genetic algorithm (GA), the interior point algorithm (IPA), and the active set algorithm (ASA). The efficiency and viability of the proposed method are confirmed by solving three examples from physiology. The obtained approximate solutions are in excellent agreement with the exact solutions as well as with some conventional numerical solutions.
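The fitness construction can be sketched as follows; the toy ODE y' + y = 0 with y(0) = 1, the function names, and the finite-difference derivative are illustrative choices of ours, not the physiological problems or exact formulation of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def y_hat(params, x):
    """Trial solution: sum_i a_i * sigmoid(w_i * x + b_i), params = [a|w|b]."""
    a, w, b = np.split(np.asarray(params), 3)
    return np.sum(a * sigmoid(np.outer(x, w) + b), axis=1)

def fitness(params, x, h=1e-5):
    """MSE of the ODE residual of y' + y = 0 plus the boundary error y(0) = 1."""
    dy = (y_hat(params, x + h) - y_hat(params, x - h)) / (2 * h)
    ode_mse = np.mean((dy + y_hat(params, x)) ** 2)
    bc_err = (y_hat(params, np.array([0.0]))[0] - 1.0) ** 2
    return ode_mse + bc_err

# A GA/IPA/ASA hybrid would then minimize fitness over the adjustable
# parameters a_i, w_i, b_i; fitness near zero means the trial solution
# satisfies both the ODE and its boundary conditions.
```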
Energy Technology Data Exchange (ETDEWEB)
Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.
1996-05-01
The theoretical background for the finite element computer program MPSalsa is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
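The nonlinear solve at the heart of each implicit step can be sketched in miniature. In this sketch a dense direct solve stands in for the preconditioned Krylov (Aztec) linear solver, and the inexactness and globalization of the real code are omitted; the toy system is our choice, not an MPSalsa problem.

```python
import numpy as np

# Newton iteration for F(x) = 0, the core of a fully implicit time step.
# np.linalg.solve stands in for MPSalsa's preconditioned Krylov solvers.
def newton(F, J, x, tol=1e-12, max_iter=20):
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x + np.linalg.solve(J(x), -f)   # solve J dx = -F, then update
    return x

# Toy system: intersection of the unit circle with the line x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton(F, J, np.array([1.0, 0.5]))   # tends to (1/sqrt(2), 1/sqrt(2))
```

An inexact Newton method relaxes the inner linear solve: the Krylov iteration is stopped once the linear residual is merely small relative to the nonlinear residual, which saves work far from the solution without destroying convergence.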