Adaptive Simulated Annealing Based Protein Loop Modeling of Neurotoxins
Institute of Scientific and Technical Information of China (English)
陈杰; 黄丽娜; 彭志红
2003-01-01
A loop modeling method based on adaptive simulated annealing is proposed for ab initio prediction of protein loop structures, casting prediction as the optimization problem of searching for the global minimum of a given energy function. An interface-friendly toolbox, LoopModeller, running on Windows and Linux and built in VC++ and OpenGL environments, is developed for analysis and visualization. Simulation results for three short-chain neurotoxins modeled by LoopModeller show that the proposed method is fast and efficient.
An adaptive approach to the physical annealing strategy for simulated annealing
Hasegawa, M.
2013-02-01
A new method for the adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding on the search characteristics of threshold algorithms, namely the primary role of relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in a heating process that starts from a quenched solution. Subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
Energy Technology Data Exchange (ETDEWEB)
Sheng, Zheng, E-mail: 19994035@sina.com [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Wang, Jun; Zhou, Bihua [National Defense Key Laboratory on Lightning Protection and Electromagnetic Camouflage, PLA University of Science and Technology, Nanjing 210007 (China); Zhou, Shudao [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044 (China)
2014-03-15
This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is proposed, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of optimization. The simulated annealing operation is therefore merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under both noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
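The core hybrid described above, a cuckoo-search-style population with heavy-tailed moves whose greedy replacement step is swapped for a simulated annealing acceptance rule, can be sketched as follows. This is a minimal stdlib-Python illustration on a toy sphere objective, not the authors' code; the function names, the crude heavy-tailed step, and all constants are assumptions for illustration.

```python
import math
import random

def sphere(x):
    # toy stand-in for the parameter-estimation error of a chaotic system
    return sum(v * v for v in x)

def heavy_step(scale=0.05):
    # crude heavy-tailed step, standing in for the Levy flight of cuckoo search
    u, v = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    return scale * u / (abs(v) ** 0.5 + 1e-12)

def cuckoo_sa(f, dim=3, n_nests=15, iters=3000, t0=1.0, cooling=0.995):
    random.seed(0)
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    costs = [f(x) for x in nests]
    b = min(range(n_nests), key=lambda i: costs[i])
    best, best_cost = list(nests[b]), costs[b]
    t = t0
    for _ in range(iters):
        i = random.randrange(n_nests)
        cand = [x + heavy_step() for x in nests[i]]
        cand_cost = f(cand)
        d = cand_cost - costs[i]
        # simulated-annealing acceptance instead of plain greedy replacement:
        # worse nests may still be kept while the temperature is high
        if d < 0 or random.random() < math.exp(-d / t):
            nests[i], costs[i] = cand, cand_cost
        if costs[i] < best_cost:
            best, best_cost = list(nests[i]), costs[i]
        t *= cooling  # simple geometric cooling stands in for the adaptive tuning
    return best, best_cost

best, best_cost = cuckoo_sa(sphere)
```

The annealed acceptance is what strengthens the local search: early on the population can traverse worse regions, while late iterations behave greedily around the incumbent.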
Ry, Rexha Verdhora; Nugraha, Andri Dian
2015-04-01
Earthquake observation is routinely used in tectonic activity monitoring, and also at local scales such as volcano-tectonic and geothermal activity monitoring. Determining a precise hypocenter requires finding the hypocenter location that minimizes the error between observed and calculated travel times. For this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano tectonic, and a geothermal field. Travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
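The inversion loop described above, perturbing a trial hypocenter and accepting moves that reduce the travel-time misfit under a Metropolis rule, can be sketched in a few lines. This is a hedged stdlib-Python toy with a 2D epicenter, straight-ray travel times, a constant velocity, and synthetic stations; none of it is the authors' Matlab code, and all constants are assumptions.

```python
import math
import random

V = 5.0  # assumed constant P-wave velocity, km/s
stations = [(0.0, 0.0), (40.0, 5.0), (10.0, 35.0), (30.0, 30.0)]
true_src = (22.0, 14.0)  # synthetic "unknown" epicenter

def travel_time(src, sta):
    # straight-ray travel time; a real code would use ray-tracing shooting
    return math.dist(src, sta) / V

obs = [travel_time(true_src, s) for s in stations]  # synthetic observations

def rms(src):
    # root-mean-square misfit between observed and calculated travel times
    return math.sqrt(sum((travel_time(src, s) - t) ** 2
                         for s, t in zip(stations, obs)) / len(stations))

def locate(iters=4000, t0=1.0, cooling=0.999, step=1.0):
    random.seed(1)
    cur = (random.uniform(0, 40), random.uniform(0, 40))  # no good initial model
    cur_rms = rms(cur)
    best, best_rms, t = cur, cur_rms, t0
    for _ in range(iters):
        cand = (cur[0] + random.gauss(0, step), cur[1] + random.gauss(0, step))
        cand_rms = rms(cand)
        # Metropolis acceptance on the misfit change
        if cand_rms < cur_rms or random.random() < math.exp((cur_rms - cand_rms) / t):
            cur, cur_rms = cand, cand_rms
        if cur_rms < best_rms:
            best, best_rms = cur, cur_rms
        t *= cooling
    return best, best_rms

best, best_rms = locate()
```

Because the acceptance rule lets the search escape poor regions early on, the result does not depend on the random starting location, which is the property the abstract highlights.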
Adaptive MANET multipath routing algorithm based on the simulated annealing approach.
Kim, Sungwook
2014-01-01
A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics such as temporary topology and the absence of a centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing the simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic real-world network situations. The proposed routing scheme is therefore a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to variations in dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with existing schemes.
Stochastic Global Optimization and Its Applications with Fuzzy Adaptive Simulated Annealing
Aguiar e Oliveira Junior, Hime; Petraglia, Antonio; Rembold Petraglia, Mariane; Augusta Soares Machado, Maria
2012-01-01
Stochastic global optimization is a very important subject, that has applications in virtually all areas of science and technology. Therefore there is nothing more opportune than writing a book about a successful and mature algorithm that turned out to be a good tool in solving difficult problems. Here we present some techniques for solving several problems by means of Fuzzy Adaptive Simulated Annealing (Fuzzy ASA), a fuzzy-controlled version of ASA, and by ASA itself. ASA is a sophisticated global optimization algorithm that is based upon ideas of the simulated annealing paradigm, coded in the C programming language and developed to statistically find the best global fit of a nonlinear constrained, non-convex cost function over a multi-dimensional space. By presenting detailed examples of its application we want to stimulate the reader’s intuition and make the use of Fuzzy ASA (or regular ASA) easier for everyone wishing to use these tools to solve problems. We kept formal mathematical requirements to a...
Schneider, Johannes J.; Puchta, Markus
2010-12-01
Simulated annealing is the classic physical optimization algorithm, which has been applied to a large variety of problems for many years. Over time, several adaptive mechanisms for decreasing the temperature, and thus controlling the acceptance of deteriorations, have been developed, based on measurements of the mean value and the variance of the energy. Here we propose a new, simplified approach in which we consider the probability of accepting deteriorations as the main control parameter and derive the temperature by averaging over the last few deteriorations stored in a memory. We present results for the traveling salesman problem and demonstrate how the amount of data retained influences both the cooling schedule and the quality of the results.
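The scheme above can be read as: anneal a target acceptance probability p instead of the temperature, and recover T from the remembered deteriorations via p = exp(-<dE>/T), i.e. T = -<dE>/ln p. The sketch below applies one plausible reading of this idea to a small random TSP with 2-opt moves; it is a stdlib-Python illustration under stated assumptions (geometric schedule for p, memory of the last 50 observed deteriorations), not the authors' implementation.

```python
import math
import random
from collections import deque

random.seed(2)
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_len(t):
    # closed-tour length; t[i-1] wraps around at i == 0
    return sum(math.dist(cities[t[i]], cities[t[i - 1]]) for i in range(len(t)))

def temperature(memory, p):
    # p = exp(-<dE>/T)  =>  T = -<dE>/ln p, with <dE> the mean stored deterioration
    if not memory or p <= 0.0:
        return 0.0
    return -(sum(memory) / len(memory)) / math.log(p)

def anneal(iters=15000, mem_size=50, p_start=0.5, p_end=1e-3):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    cur = tour_len(tour)
    memory = deque(maxlen=mem_size)  # the last few observed deteriorations
    best, best_len = tour[:], cur
    for k in range(iters):
        # the acceptance probability itself is annealed, geometrically
        p = p_start * (p_end / p_start) ** (k / iters)
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # 2-opt segment reversal
        cand_len = tour_len(cand)
        d = cand_len - cur
        if d > 0:
            memory.append(d)
        t = temperature(memory, p)
        if d < 0 or (t > 0 and random.random() < math.exp(-d / t)):
            tour, cur = cand, cand_len
        if cur < best_len:
            best, best_len = tour[:], cur
    return best, best_len

best, best_len = anneal()
```

The memory size plays the role the abstract describes: a longer memory smooths the derived temperature, while a short one makes the schedule react quickly to the local energy landscape.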
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
Keystream Generator Based On Simulated Annealing
Directory of Open Access Journals (Sweden)
Ayad A. Abdulsalam
2011-01-01
Advances in the design of keystream generators using heuristic techniques are reported. A simulated annealing algorithm for generating random keystreams with large complexity is presented, with the simulated annealing technique adapted to meet these requirements. Definitions of some cryptographic properties are generalized, providing a measure suitable for use as an objective function in a simulated annealing algorithm seeking keystreams that satisfy both correlation immunity and large linear complexity. Results are presented demonstrating the effectiveness of the method.
multicast using Simulated Annealing
Directory of Open Access Journals (Sweden)
Yezid Donoso
2005-01-01
This article presents a multi-objective optimization method for solving the load-balancing problem in multicast transmission networks, based on the application of the Simulated Annealing metaheuristic. The method minimizes four basic parameters to guarantee quality of service in multicast transmissions: source-destination delay, maximum link utilization, consumed bandwidth, and number of hops. The results returned by the heuristic are compared with the results produced by the mathematical model proposed in previous research.
Recursive simulation of quantum annealing
Sowa, A P; Samson, J H; Savel'ev, S E; Zagoskin, A M; Heidel, S; Zúñiga-Anaya, J C
2015-01-01
The evaluation of the performance of adiabatic annealers is hindered by the lack of efficient algorithms for simulating their behaviour. We exploit the analyticity of the standard model for the adiabatic quantum process to develop an efficient recursive method for its numerical simulation in the case of both unitary and non-unitary evolution. Numerical simulations show distinctly different distributions for the most important figure of merit of adiabatic quantum computing, the success probability, in these two cases.
Feasibility of Simulated Annealing Tomography
Vo, Nghia T; Moser, Herbert O
2014-01-01
Simulated annealing tomography (SAT) is a simple iterative image reconstruction technique which can yield a superior reconstruction compared with filtered back-projection (FBP). However, the very high computational cost of iteratively calculating the discrete Radon transform (DRT) has limited the feasibility of this technique. In this paper, we propose an approach based on a pre-calculated intersection lengths array (PILA), which removes the step of computing the DRT from the simulated annealing procedure and speeds up SAT by over 300 times. An enhancement of the convergence speed of the reconstruction process using the best-of-multiple-estimates (BoME) strategy is introduced. The performance of SAT under different conditions, and in comparison with other methods, is demonstrated by numerical experiments.
Residual entropy and simulated annealing
Ettelaie, R.; Moore, M. A.
1985-01-01
Determining the residual entropy in the simulated annealing approach to optimization is shown to provide useful information on the true ground state energy. The one-dimensional Ising spin glass is studied to exemplify the procedure and in this case the residual entropy is related to the number of one-spin flip stable metastable states. The residual entropy decreases to zero only logarithmically slowly with the inverse cooling rate.
Energy Technology Data Exchange (ETDEWEB)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized over a hyper-rectangular domain; equality constraints can also be specified. A similar problem consists in fitting component models; in that case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originated in combinatorial optimization, has been adapted to and compared with other global optimization methods for continuous-variable problems. An efficient strategy for variable discretization and a set of complementary stopping criteria have been proposed. The parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm, and the tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
Cylinder packing by simulated annealing
Directory of Open Access Journals (Sweden)
M. Helena Correia
2000-12-01
This paper is motivated by the problem of loading identical items of circular base (tubes, rolls, ...) onto a rectangular base (the pallet). For practical reasons, all the loaded items are considered to have the same height. Solving this problem consists in determining the positioning pattern of the circular bases of the items on the rectangular pallet while maximizing the number of items. This pattern is repeated for each layer stacked on the pallet. Two algorithms based on the Simulated Annealing meta-heuristic have been developed and implemented. Tuning the parameters of these algorithms required intensive tests in order to improve their efficiency. The algorithms developed were easily extended to the case of non-identical circles.
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieving the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error metric for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
Quantum Adiabatic Evolution Algorithms versus Simulated Annealing
Farhi, E; Gutmann, S; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam
2002-01-01
We explain why quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time.
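The symmetric cost functions discussed above depend only on the Hamming weight of the n bits, so classical simulated annealing on such a cost is easy to sketch. The stdlib-Python toy below anneals the simplest instance, cost equal to the Hamming weight itself; the constants and cooling schedule are illustrative assumptions, not taken from the paper.

```python
import math
import random

def hamming_cost(bits):
    # symmetric cost: depends only on the number of 1s (the Hamming weight)
    return sum(bits)

def anneal(n=30, iters=3000, t0=2.0, cooling=0.998):
    random.seed(3)
    bits = [random.randint(0, 1) for _ in range(n)]
    t = t0
    for _ in range(iters):
        i = random.randrange(n)
        # flipping bit i changes the cost by +1 (0 -> 1) or -1 (1 -> 0)
        delta = 1 - 2 * bits[i]
        if delta < 0 or random.random() < math.exp(-delta / t):
            bits[i] ^= 1
        t *= cooling
    return bits

bits = anneal()
```

For this symmetric landscape the minimum (the all-zeros string) is reached quickly; the paper's point is that closely related symmetric costs can instead trap simulated annealing for exponential time while the quantum adiabatic algorithm still succeeds in polynomial time.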
An Application of Simulated Annealing to Scheduling Army Unit Training
1986-10-01
Simulated annealing operates by analogy to the metallurgical process which strengthens metals through successive heating and cooling. The method is highly ... diminishing returns is observed. The simulated annealing heuristic operates by analogy to annealing in physical systems. Annealing in a physical
A simulated annealing technique for multi-objective simulation optimization
Mahmoud H. Alrefaei; Diabat, Ali H.
2009-01-01
In this paper, we present a simulated annealing algorithm for solving multi-objective simulation optimization problems. The algorithm is based on the idea of simulated annealing with constant temperature, and uses a rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge almost surely to an optimal solution. It is applied to a multi-objective inventory problem; the numerical results show that the algorithm ...
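One plausible reading of the constant-temperature, estimate-based acceptance rule above can be sketched on a toy bi-objective problem. This stdlib-Python illustration uses two conflicting quadratics whose Pareto set is the interval [0, 2], noisy "simulation" estimates of each objective, and acceptance driven by the worst estimated worsening; all names and constants are assumptions, not the authors' algorithm.

```python
import math
import random

def objectives(x):
    # two conflicting objectives; the Pareto-optimal set is x in [0, 2]
    return (x * x, (x - 2.0) ** 2)

def estimate(x, noise=0.05):
    # noisy simulation-based estimates of the individual objective values
    f1, f2 = objectives(x)
    return (f1 + random.gauss(0, noise), f2 + random.gauss(0, noise))

def mosa(iters=4000, temp=0.1, step=0.3):
    random.seed(4)
    x = random.uniform(-5, 5)
    trace = []
    for _ in range(iters):
        cand = x + random.gauss(0, step)
        e_cur, e_cand = estimate(x), estimate(cand)
        # acceptance depends on the individual estimated objectives:
        # judge the candidate by its worst estimated worsening
        worst = max(c - u for c, u in zip(e_cand, e_cur))
        if worst <= 0 or random.random() < math.exp(-worst / temp):
            x = cand
        trace.append(x)
    return trace

trace = mosa()
final = trace[-1]
```

With the temperature held constant, the chain keeps wandering along the Pareto set instead of freezing at a single point, which is the behavior a multi-objective sampler wants.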
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
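The objective above, maximizing the expected logarithm of the portfolio growth rate over its weights, can be annealed directly. The stdlib-Python sketch below uses synthetic per-period gross returns in place of the paper's real financial data, keeps weights on the simplex by clamping and renormalizing, and is an illustrative assumption throughout, not the authors' method.

```python
import math
import random

random.seed(5)
# synthetic per-period gross returns for three assets (stand-ins for real data):
# (mean excess return, volatility) = (0.02, 0.10), (0.05, 0.25), (0.00, 0.01)
scenarios = [[1.0 + random.gauss(mu, sig) for mu, sig in
              ((0.02, 0.10), (0.05, 0.25), (0.00, 0.01))]
             for _ in range(250)]

def log_growth(w):
    # expected logarithm of portfolio growth (the objective to maximize)
    total = 0.0
    for r in scenarios:
        port = sum(wi * ri for wi, ri in zip(w, r))
        if port <= 0.0:
            return -1e9  # a period of total loss is inadmissible
        total += math.log(port)
    return total / len(scenarios)

def normalize(w):
    # project onto the simplex: non-negative weights that sum to 1
    w = [max(v, 0.0) for v in w]
    s = sum(w) or 1.0
    return [v / s for v in w]

def anneal(iters=1500, t0=0.01, cooling=0.997, step=0.1):
    w = normalize([random.random() for _ in range(3)])
    fw = log_growth(w)
    best, fbest, t = w, fw, t0
    for _ in range(iters):
        cand = normalize([v + random.gauss(0, step) for v in w])
        fc = log_growth(cand)
        # maximization: accept improvements, and worsenings with Metropolis prob.
        if fc > fw or random.random() < math.exp((fc - fw) / t):
            w, fw = cand, fc
        if fw > fbest:
            best, fbest = w, fw
        t *= cooling
    return best, fbest

weights, growth = anneal()
```
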
Binary Sparse Phase Retrieval via Simulated Annealing
Directory of Open Access Journals (Sweden)
Wei Peng
2016-01-01
This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transform. A greedy-strategy version, which is parameter-free, is also proposed for comparison. Extensive numerical simulations indicate that the method is quite effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in its efficiency and high recovery rate, even with fewer Fourier measurements.
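The recovery problem above can be sketched with a small stdlib-only toy: pick a k-sparse binary signal, keep only its DFT magnitudes, and anneal over sparsity-preserving support moves to match them. This is a hedged illustration of the problem setting, not the SASPAR algorithm itself; the move type, constants, and a hand-rolled O(n^2) DFT are all assumptions.

```python
import cmath
import math
import random

def dft_mag(x):
    # magnitudes of the discrete Fourier transform (stdlib-only, O(n^2))
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

random.seed(6)
n, k = 16, 4
true_sig = [0] * n
for idx in random.sample(range(n), k):
    true_sig[idx] = 1
target = dft_mag(true_sig)  # the phaseless measurements

def cost(sig):
    return sum((a - b) ** 2 for a, b in zip(dft_mag(sig), target))

def recover(iters=1500, t0=2.0, cooling=0.996):
    sig = [0] * n
    for idx in random.sample(range(n), k):
        sig[idx] = 1
    c_sig = cost(sig)
    best, c_best, t = sig[:], c_sig, t0
    for _ in range(iters):
        cand = sig[:]
        # move one 1 to an empty position, preserving the sparsity level k
        ones = [i for i, v in enumerate(cand) if v]
        zeros = [i for i, v in enumerate(cand) if not v]
        cand[random.choice(ones)] = 0
        cand[random.choice(zeros)] = 1
        c_cand = cost(cand)
        if c_cand < c_sig or random.random() < math.exp((c_sig - c_cand) / t):
            sig, c_sig = cand, c_cand
        if c_sig < c_best:
            best, c_best = sig[:], c_sig
        t *= cooling
    return best, c_best

recovered, final_cost = recover()
```

Note that circular shifts and reflections of the true support share the same magnitudes, so a zero-cost solution need not equal `true_sig`; that ambiguity is inherent to phase retrieval, not a defect of the annealer.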
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
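The list-based schedule described above can be sketched as: seed a list of temperatures from random deteriorations, always anneal at the list's maximum, and whenever a worse move is accepted with random number r, record the temperature -d/ln r that would have accepted it with exactly that probability, then fold those back into the list. The stdlib-Python toy below applies one plausible reading of this to a 25-city TSP with 2-opt moves; list size, iteration counts, and the update rule are assumptions, not the paper's exact settings.

```python
import math
import random

random.seed(7)
cities = [(random.random(), random.random()) for _ in range(25)]

def tour_len(t):
    return sum(math.dist(cities[t[i]], cities[t[i - 1]]) for i in range(len(t)))

def lbsa(list_size=50, outer=250, inner=80):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    cur = tour_len(tour)
    # seed the temperature list from random 2-opt deteriorations
    temps = []
    while len(temps) < list_size:
        i, j = sorted(random.sample(range(len(tour)), 2))
        d = tour_len(tour[:i] + tour[i:j][::-1] + tour[j:]) - cur
        if d > 0:
            temps.append(d / math.log(2))  # initial acceptance probability ~0.5
    best, best_len = tour[:], cur
    for _ in range(outer):
        t_max = max(temps)  # Metropolis always uses the list maximum
        implied = []
        for _ in range(inner):
            i, j = sorted(random.sample(range(len(tour)), 2))
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            cand_len = tour_len(cand)
            d = cand_len - cur
            if d <= 0:
                tour, cur = cand, cand_len
            else:
                r = random.random()
                if 0.0 < r < math.exp(-d / t_max):
                    tour, cur = cand, cand_len
                    implied.append(-d / math.log(r))  # temp accepting with prob r
            if cur < best_len:
                best, best_len = tour[:], cur
        if implied:
            # adapt: replace the maximum with the mean implied temperature
            temps.remove(t_max)
            temps.append(sum(implied) / len(implied))
    return best, best_len

best, best_len = lbsa()
```

Because the implied temperatures shrink as the tour improves, the list cools itself to match the landscape, which is why the schedule needs almost no tuning.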
Comparative study of the performance of quantum annealing and simulated annealing.
Nishimori, Hidetoshi; Tsuda, Junichi; Knysh, Sergey
2015-01-01
Relations between simulated annealing and quantum annealing are studied by a mapping from the transition matrix of classical Markovian dynamics of the Ising model to a quantum Hamiltonian, and vice versa. It is shown that these two operators, the transition matrix and the Hamiltonian, share the eigenvalue spectrum. Thus, if simulated annealing with slow temperature change does not encounter a difficulty caused by an exponentially long relaxation time at a first-order phase transition, the same is true for the corresponding process of quantum annealing in the adiabatic limit. One of the important differences between the classical-to-quantum mapping and the converse quantum-to-classical mapping is that the Markovian dynamics of a short-range Ising model is mapped to a short-range quantum system, whereas the converse mapping from a short-range quantum system to a classical one results in long-range interactions. This leads to a difference in efficiency: simulated annealing can be efficiently simulated by quantum annealing, but the converse is not necessarily true. We conclude that quantum annealing is easier to implement and is more flexible than simulated annealing. We also point out that the present mapping can be extended to accommodate explicit time dependence of the temperature, which is used to justify the quantum-mechanical analysis of simulated annealing by Somma, Batista, and Ortiz. Additionally, an alternative method to solve the nonequilibrium dynamics of the one-dimensional Ising model is provided through the classical-to-quantum mapping.
MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING
Directory of Open Access Journals (Sweden)
Ladislav Rosocha
2015-07-01
Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Better implementation of their preferences in the scheduling problem might therefore not only raise the work-life balance of doctors and nurses but may also result in better patient care. This paper focuses on the optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, the mental well-being, and the work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department, we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm has more than 18 times fewer soft-constraint violations than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules with regard to both hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively due to the local neighborhood search characteristics of simulated annealing.
A Parallel Genetic Simulated Annealing Hybrid Algorithm for Task Scheduling
Institute of Scientific and Technical Information of China (English)
SHU Wanneng; ZHENG Shijue
2006-01-01
In this paper, combining the advantages of genetic algorithms and simulated annealing, we put forward a parallel genetic simulated annealing hybrid algorithm (PGSAHA) and apply it to the task scheduling problem in grid computing. The algorithm first generates a new group of individuals through genetic operations such as reproduction, crossover and mutation, and then simulated-anneals all the generated individuals independently. When the temperature no longer falls during the cooling process, the result is taken as the overall optimal solution. Analysis and experimental results show that this algorithm is superior to both the genetic algorithm and simulated annealing alone.
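The generate-then-anneal loop of such a hybrid can be sketched as follows. This is a minimal illustration of the general GA+SA idea, not the authors' PGSAHA implementation: the bitstring objective, population size and cooling parameters are all invented for the example.

```python
import math
import random

random.seed(0)

N = 20  # bitstring length
TARGET = [random.randint(0, 1) for _ in range(N)]

def cost(x):
    # toy objective: Hamming distance to a hidden target bitstring
    return sum(a != b for a, b in zip(x, TARGET))

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(x, rate=0.05):
    return [1 - g if random.random() < rate else g for g in x]

def anneal(x, t0=2.0, alpha=0.9, t_min=0.01, steps=50):
    # independent SA refinement of one individual (Metropolis rule)
    cur, cur_c = x[:], cost(x)
    best, best_c = cur[:], cur_c
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = cur[:]
            i = random.randrange(N)
            cand[i] = 1 - cand[i]
            d = cost(cand) - cur_c
            if d <= 0 or random.random() < math.exp(-d / t):
                cur, cur_c = cand, cur_c + d
                if cur_c < best_c:
                    best, best_c = cur[:], cur_c
        t *= alpha
    return best

def hybrid_ga_sa(pop_size=10, generations=5):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]           # reproduction (selection)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size)]
        pop = [anneal(c) for c in children]      # anneal every individual
    return min(pop, key=cost)

best = hybrid_ga_sa()
```

Each child is annealed independently, which is the part that parallelizes naturally in a grid setting.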
Hierarchical Network Design Using Simulated Annealing
DEFF Research Database (Denmark)
Thomadsen, Tommy; Clausen, Jens
2002-01-01
The hierarchical network problem is the problem of finding the least-cost network, with nodes divided into groups, edges connecting nodes in each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks, where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance of different versions of the algorithm is reported in terms of runtime and solution quality. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
Remote sensing of atmospheric duct parameters using simulated annealing
Institute of Scientific and Technical Information of China (English)
Zhao Xiao-Feng; Huang Si-Xun; Xiang Jie; Shi Wei-Lai
2011-01-01
Simulated annealing is one of the robust optimization schemes. It mimics the annealing process in which a heated metal is cooled slowly to reach a stable minimum-energy state. In this paper, we adopt simulated annealing to study the remote sensing of atmospheric duct parameters for two different propagation-measurement geometries: one from a single emitter to an array of radio receivers (vertical measurements), the other from radar clutter returns (horizontal measurements). Basic principles of simulated annealing and its application to refractivity estimation are introduced. The performance of the method is validated using numerical experiments and field measurements collected in the East China Sea. The retrieved results demonstrate the feasibility of simulated annealing for near-real-time atmospheric refractivity estimation. For comparison, retrievals by a genetic algorithm are also presented. The comparisons indicate that simulated annealing converges faster than the genetic algorithm, while the genetic algorithm is more robust to noise.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated annealing (SA) is a widely used meta-heuristic inspired by the annealing process in the recrystallization of metals, so its efficiency is highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we propose the best-found annealing schedule based on a post hoc test. SA was tested on seven selected benchmark symmetric TSP instances with the proposed annealing schedule. Its performance was evaluated empirically against benchmark solutions, with a simple analysis to validate solution quality. Computational results show that the proposed annealing schedule provides good-quality solutions.
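As a concrete illustration of the kind of annealing schedule being compared, here is a minimal geometric-cooling SA for a symmetric TSP with a 2-opt neighbourhood. The random instance, initial temperature, cooling rate and sweep count are arbitrary choices for the sketch, not the schedule proposed in the paper.

```python
import math
import random

random.seed(1)

# toy instance: random city coordinates (a real study would use TSPLIB data)
cities = [(random.random(), random.random()) for _ in range(25)]

def tour_length(tour):
    # closed tour; tour[i - 1] wraps to the last city when i == 0
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

def two_opt_neighbor(tour):
    # reverse a random segment -- the classic 2-opt move
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def simulated_annealing(t0=1.0, t_min=1e-3, alpha=0.95, sweeps=200):
    cur = list(range(len(cities)))
    random.shuffle(cur)
    cur_len = tour_length(cur)
    best, best_len = cur[:], cur_len
    t = t0
    while t > t_min:                      # geometric cooling schedule
        for _ in range(sweeps):
            cand = two_opt_neighbor(cur)
            cand_len = tour_length(cand)
            if (cand_len <= cur_len
                    or random.random() < math.exp((cur_len - cand_len) / t)):
                cur, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = cur[:], cur_len
        t *= alpha
    return best, best_len

best_tour, best_len = simulated_annealing()
```

The three schedule parameters varied in studies like this one are `t0`, `alpha` and `sweeps` (the iterations per temperature level).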
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Shi-hua Zhan
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some state-of-the-art algorithms.
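The core of a list-based cooling schedule can be sketched as follows, here on a toy one-dimensional objective rather than a TSP tour: the list maximum drives the Metropolis test, and whenever worse solutions are accepted, the maximum is replaced by the average temperature that would have just admitted those moves. All constants are invented for the illustration; this is the spirit of the scheme, not the paper's exact update rule.

```python
import math
import random

random.seed(2)

def cost(x):
    # toy 1-D multimodal objective (a hypothetical stand-in for a tour cost)
    return x * x + 10 * math.sin(3 * x)

def neighbor(x):
    return x + random.uniform(-0.5, 0.5)

def list_based_sa(x0, list_len=30, outer=100, inner=50):
    # seed the temperature list from uphill moves of an initial random walk
    temps, x = [], x0
    while len(temps) < list_len:
        y = neighbor(x)
        d = cost(y) - cost(x)
        if d > 0:
            temps.append(-d / math.log(random.random()))
        x = y
    cur = best = x0
    for _ in range(outer):
        t_max = max(temps)          # only the list maximum drives acceptance
        acc_t, n_acc = 0.0, 0
        for _ in range(inner):
            cand = neighbor(cur)
            d = cost(cand) - cost(cur)
            if d <= 0:
                cur = cand
            else:
                r = random.random()
                if r < math.exp(-d / t_max):
                    cur = cand
                    acc_t += -d / math.log(r)  # temperature that would just
                    n_acc += 1                 # admit this uphill move
            if cost(cur) < cost(best):
                best = cur
        if n_acc:                   # adapt: replace the max by the average
            temps.remove(t_max)
            temps.append(acc_t / n_acc)
    return best

best = list_based_sa(8.0)
```

Because each replacement temperature is computed from accepted moves, the list cools at a rate dictated by the solution landscape rather than by a hand-tuned decay constant.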
Kriging-approximation simulated annealing algorithm for groundwater modeling
Shen, C. H.
2015-12-01
Optimization algorithms are often applied to search for the best parameters of complex groundwater models, but running such models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables from surrounding data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
A NEW GENETIC SIMULATED ANNEALING ALGORITHM FOR FLOOD ROUTING MODEL
Institute of Scientific and Technical Information of China (English)
KANG Ling; WANG Cheng; JIANG Tie-bing
2004-01-01
In this paper, a new approach, Genetic Simulated Annealing (GSA), is proposed for optimizing the parameters of the Muskingum routing model. By integrating the simulated annealing method into the genetic algorithm, the hybrid method avoids several troubles of traditional methods, such as the arduous trial-and-error procedure, premature convergence in the genetic algorithm, and search blindness in simulated annealing. The principle and implementation procedure of the algorithm are described. Numerical experiments show that GSA can adjust the optimization population, prevent premature convergence and find the global optimum. Applications to the Nanyunhe River and Qingjiang River show that the proposed approach has higher forecast accuracy and practicability.
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard and hence, in general, requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way that melting temperatures differ between materials, we observe distinct specific-heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss applications in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
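A specific-heat profile of the kind discussed above can be estimated during any SA run from energy fluctuations at (quasi-)fixed temperature, C(T) = Var(E)/T². The sketch below does this for a toy one-dimensional Ising-style energy standing in for a parsimony score; the chain length, sweep counts and cooling rate are invented for the illustration.

```python
import math
import random
import statistics

random.seed(3)

N = 40  # chain length

def energy(s):
    # toy 1-D Ising-style energy, a stand-in for a tree's parsimony score
    return -sum(s[i] * s[i + 1] for i in range(N - 1))

def metropolis_stage(s, t, n_flips=800):
    # run Metropolis at fixed temperature t, recording the energy trace
    trace = []
    e = energy(s)
    for _ in range(n_flips):
        i = random.randrange(N)
        left = s[i - 1] if i > 0 else 0
        right = s[i + 1] if i < N - 1 else 0
        d = 2 * s[i] * (left + right)    # energy change of flipping spin i
        if d <= 0 or random.random() < math.exp(-d / t):
            s[i] = -s[i]
            e += d
        trace.append(e)
    return trace

s = [random.choice((-1, 1)) for _ in range(N)]
profile = []                             # (temperature, specific heat) pairs
t = 3.0
while t > 0.2:
    trace = metropolis_stage(s, t)
    profile.append((t, statistics.pvariance(trace) / (t * t)))
    t *= 0.9
final_energy = energy(s)
```

A peak in `profile` marks the temperature range where the system reorganizes most strongly, which is the analogue of the phase-transition signal the authors use as a diagnostic.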
SIMULATED ANNEALING BASED POLYNOMIAL TIME QOS ROUTING ALGORITHM FOR MANETS
Institute of Scientific and Technical Information of China (English)
Liu Lianggui; Feng Guangzeng
2006-01-01
Multi-constrained Quality-of-Service (QoS) routing is a big challenge for Mobile Ad hoc Networks (MANETs), where the topology may change constantly. In this paper, a novel QoS routing algorithm based on simulated annealing (SA_RA) is proposed. The algorithm first uses an energy function to translate multiple QoS weights into a single mixed metric and then seeks a feasible path by simulated annealing. The paper outlines the simulated annealing algorithm and analyzes the problems met when applying it to QoS routing (QoSR) in MANETs. Theoretical analysis and experimental results demonstrate that the proposed method is an effective approximation algorithm, showing better performance than other pertinent algorithms in seeking the (approximate) optimal configuration in polynomial time.
A theoretical comparison of evolutionary algorithms and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that, under mild conditions, a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Coordination Hydrothermal Interconnection Java-Bali Using Simulated Annealing
Wicaksono, B.; Abdullah, A. G.; Saputra, W. S.
2016-04-01
Hydrothermal power plant coordination aims to minimize the total operating cost of the system, represented by fuel cost, subject to constraints during optimization. Several methods can be used to perform the optimization; simulated annealing (SA) is one of them. This method was inspired by the annealing or cooling process in the manufacture of crystalline materials. The basic principle of hydrothermal coordination is to use hydro power plants to cover the base load while thermal power plants cover the remaining load. This study used two hydro power plant units and six thermal power plant units on a 25-bus system, calculating transmission losses and considering the power limits of each plant unit, aided by MATLAB software. Hydrothermal coordination using simulated annealing yields a total generation cost of 13,288,508.01 for 24 hours.
Analysis of Trivium by a Simulated Annealing variant
DEFF Research Database (Denmark)
Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian
2010-01-01
A characteristic of equation systems that may be efficiently solvable by means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method...
Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing
Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.
2006-01-01
The subject of this paper is a new approach to symbolic regression. Other publications on symbolic regression use genetic programming; this paper describes an alternative method based on Pareto simulated annealing. Our method is based on linear regression for the estimation of constants. Interval arithmetic...
Molecular dynamics simulation of annealed ZnO surfaces
Energy Technology Data Exchange (ETDEWEB)
Min, Tjun Kit; Yoon, Tiem Leong [School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Lim, Thong Leng [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia)
2015-04-24
The effect of thermally annealing a slab of wurtzite ZnO terminated by two surfaces, (0001) (oxygen-terminated) and (0001̄) (Zn-terminated), is investigated via molecular dynamics simulation using the reactive force field ReaxFF. We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface, and the fraction of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of atomic-level phenomena occurring on the (0001) surface has also been explored, such as the formation of oxygen dimers on the surface and the evolution of the partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of depth from the surface undergoes a qualitative change when the annealing temperature exceeds the threshold temperature.
Wenbo Wu; Jiahong Liang; Xinyu Yao; Baohong Liu
2014-01-01
This paper addresses the problem of task allocation in real-time distributed systems with the goal of maximizing system reliability, which has been shown to be NP-hard. We take the deadline constraint into account in formulating this problem and then propose an algorithm called chaotic adaptive simulated annealing (XASA) to solve it. Firstly, XASA begins with chaotic optimization, which takes a chaotic walk in the solution space and generates several local minima; secondly, XASA improv...
Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems
Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad
2014-04-01
This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'move or not' criterion of the original VNS regarding incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far, by k ← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. A geographical neighbourhood structure is introduced in constructing the neighbourhood structures for the CVRP under the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
Ranking important nodes in complex networks by simulated annealing
Sun, Yu; Yao, Pei-Yang; Wan, Lu-Jun; Shen, Jian; Zhong, Yun
2017-02-01
In this paper, a new method based on simulated annealing to rank important nodes in complex networks is presented. First, the concept of an importance sequence (IS), describing the relative importance of nodes in a complex network, is defined. Then, a measure used to evaluate the reasonability of an IS is designed. By treating an IS as a state of the network and the measure of its reasonability as the energy of that state, the method finds the ground state of the network by simulated annealing; in other words, it constructs the most reasonable IS. Experiments on real and artificial networks show that this ranking method is not only effective but also applicable to different kinds of complex networks. Project supported by the National Natural Science Foundation of China (Grant No. 61573017) and the Natural Science Foundation of Shaanxi Province, China (Grant No. 2016JQ6062).
Simulated Annealing for the 0/1 Multidimensional Knapsack Problem
Institute of Scientific and Technical Information of China (English)
Fubin Qian; Rui Ding
2007-01-01
In this paper a simulated annealing (SA) algorithm is presented for the 0/1 multidimensional knapsack problem. Problem-specific knowledge is incorporated into the algorithm description and the evaluation of parameters in order to examine the performance of finite-time implementations of SA. Computational results show that SA performs much better than a genetic algorithm in terms of solution time, with only a modest loss of solution quality.
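One common way to handle the knapsack constraints in SA is to restrict the walk to feasible solutions, rejecting any neighbour that violates a capacity. The sketch below does this for a small random instance; the data and all parameters are invented for the illustration and do not reproduce the paper's problem-specific tuning.

```python
import math
import random

random.seed(4)

# toy multidimensional knapsack instance (hypothetical, not from the paper)
n, m = 15, 3                        # items, resource dimensions
profit = [random.randint(10, 50) for _ in range(n)]
weight = [[random.randint(1, 20) for _ in range(n)] for _ in range(m)]
capacity = [sum(row) // 3 for row in weight]

def feasible(x):
    return all(sum(row[i] for i in range(n) if x[i]) <= c
               for row, c in zip(weight, capacity))

def value(x):
    return sum(profit[i] for i in range(n) if x[i])

def anneal(t0=50.0, alpha=0.95, t_min=0.1, sweeps=100):
    x = [0] * n                     # the empty knapsack is always feasible
    best = x[:]
    t = t0
    while t > t_min:
        for _ in range(sweeps):
            cand = x[:]
            i = random.randrange(n)
            cand[i] = 1 - cand[i]   # flip one item in or out
            if not feasible(cand):  # reject infeasible neighbours outright
                continue
            d = value(cand) - value(x)
            # maximizing: accept uphill freely, downhill with SA probability
            if d >= 0 or random.random() < math.exp(d / t):
                x = cand
                if value(x) > value(best):
                    best = x[:]
        t *= alpha
    return best

best_pack = anneal()
```

An alternative design, also common in the literature, is to allow infeasible states but penalize capacity violations in the objective; the rejection scheme above is simply the shorter one to state.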
Solving geometric constraints with genetic simulated annealing algorithm
Institute of Scientific and Technical Information of China (English)
刘生礼; 唐敏; 董金祥
2003-01-01
This paper applies a genetic simulated annealing algorithm (SAGA) to solving geometric constraint problems. The method makes full use of the advantages of SAGA and can naturally handle under-/over-constrained problems. Because it is not sensitive to initial values, it has advantages over the Newton-Raphson method, and its ability to yield multiple solutions is an advantage over other optimization methods for multi-solution constraint systems. Our experiments have proved the robustness and efficiency of this method.
Rayleigh wave inversion using heat-bath simulated annealing algorithm
Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng
2016-11-01
The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, and it is unsuitable to use local search methods (LSMs) as the inversion algorithm. In this study, a new strategy is proposed based on a variant of simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. Levenberg-Marquardt (LM) algorithm and a variant of SA, known as the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.
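The distinguishing feature of a heat-bath move, drawing the new value of one model parameter directly from the Boltzmann distribution over a candidate grid instead of the usual perturb-then-accept Metropolis step, can be sketched as follows. The three-layer "model", the grid and the cooling schedule are invented for the illustration and are unrelated to the field data in the paper.

```python
import math
import random

random.seed(6)

def heat_bath_update(x, idx, grid, cost, t):
    # heat-bath move: sample parameter idx directly from Boltzmann weights,
    # with the minimum cost subtracted for numerical stability
    trials = [x[:idx] + [v] + x[idx + 1:] for v in grid]
    costs = [cost(c) for c in trials]
    cmin = min(costs)
    weights = [math.exp(-(c - cmin) / t) for c in costs]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for trial, wgt in zip(trials, weights):
        acc += wgt
        if r <= acc:
            return trial
    return trials[-1]

# toy inversion target: recover a 3-layer "velocity" model (hypothetical)
target = [1.2, 2.5, 3.1]
grid = [0.5 + 0.1 * k for k in range(40)]   # candidate values per layer

def misfit(model):
    return sum((a - b) ** 2 for a, b in zip(model, target))

model = [grid[0]] * 3
t = 1.0
while t > 1e-3:
    for idx in range(len(model)):
        model = heat_bath_update(model, idx, grid, misfit, t)
    t *= 0.9
```

Because every proposal is accepted (the randomness is in which candidate is drawn), the scheme is "one-step" in the sense the abstract describes: there is no separate Metropolis accept/reject stage.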
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
Simulated annealing spectral clustering algorithm for image segmentation
Institute of Scientific and Technical Information of China (English)
Yifang Yang; and Yuping Wang
2014-01-01
The similarity measure is crucial to the performance of spectral clustering. The Gaussian kernel function based on the Euclidean distance is usually adopted as the similarity measure. However, the Euclidean distance measure cannot fully reveal complex distributions of data, and the result of spectral clustering is very sensitive to the scaling parameter. To solve these problems, a new manifold distance measure and a novel simulated annealing spectral clustering (SASC) algorithm based on the manifold distance measure are proposed. Simulated annealing based on a genetic algorithm (SAGA), characterized by its rapid convergence to the global optimum, is used to cluster the sample points in the spectral mapping space. The proposed algorithm can not only reflect local and global consistency better, but also reduce the sensitivity of spectral clustering to the kernel parameter, which improves the algorithm's clustering performance. To efficiently apply the algorithm to image segmentation, the Nyström method is used to reduce the computational complexity. Experimental results show that, compared with traditional clustering algorithms and popular spectral clustering algorithms, the proposed algorithm achieves better clustering performance on several synthetic datasets, texture images and real images.
Directory of Open Access Journals (Sweden)
Cheng-Ming Lee
2016-11-01
A reinforcement learning algorithm is proposed to improve the accuracy of short-term load forecasting (STLF) in this article. The proposed model integrates a radial basis function neural network (RBFNN), support vector regression (SVR) and an adaptive annealing learning algorithm (AALA). In the proposed methodology, the initial structure of the RBFNN is first determined using SVR. Then, an AALA with time-varying learning rates is used to optimize the initial parameters of the SVR-RBFNN (AALA-SVR-RBFNN). To overcome stagnation when searching for the optimal RBFNN, particle swarm optimization (PSO) is applied to simultaneously find promising learning rates in the AALA. Finally, short-term load demands are predicted using the optimal RBFNN. The performance of the proposed methodology is verified on an actual load dataset from the Taiwan Power Company (TPC). Simulation results reveal that the proposed AALA-SVR-RBFNN achieves better load-forecasting precision than various RBFNNs.
Simulated annealing approach to the max cut problem
Sen, Sandip
1993-03-01
In this paper we address the problem of partitioning the nodes of a random graph into two sets so as to maximize the sum of the weights on the edges connecting nodes belonging to different sets. This problem has important real-life counterparts but has been proven to be NP-complete. As such, a number of heuristic solution techniques have been proposed in the literature to address it. We propose a stochastic optimization technique, simulated annealing, to find solutions to the max cut problem. Our experiments verify that good solutions can be found using this algorithm in a reasonable amount of time.
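A minimal SA for max cut moves one node at a time between the two sets, accepting weight-decreasing moves with the usual Boltzmann probability. The random instance and schedule below are invented for the sketch, not taken from the paper's experiments.

```python
import math
import random

random.seed(5)

# toy instance: random non-negative weights on the edges of a complete graph
n = 16
w = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = random.randint(0, 9)

def cut_value(side):
    # total weight of edges whose endpoints lie on different sides
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])

def anneal_max_cut(t0=20.0, alpha=0.9, t_min=0.05, sweeps=200):
    side = [random.randint(0, 1) for _ in range(n)]
    best = side[:]
    t = t0
    while t > t_min:
        for _ in range(sweeps):
            i = random.randrange(n)
            # gain in cut weight from moving node i to the other side
            gain = sum(w[i][j] * (1 if side[i] == side[j] else -1)
                       for j in range(n) if j != i)
            if gain >= 0 or random.random() < math.exp(gain / t):
                side[i] ^= 1
                if cut_value(side) > cut_value(best):
                    best = side[:]
        t *= alpha
    return best

best_side = anneal_max_cut()
```

Computing the gain of a single-node move incrementally, as above, keeps each SA step at O(n) instead of re-evaluating the full O(n²) cut.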
Stochastic annealing simulations of defect interactions among subcascades
Energy Technology Data Exchange (ETDEWEB)
Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N.
1997-04-01
The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.
Sparse approximation problem: how rapid simulated annealing succeeds and fails
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
Simulated annealing technique to design minimum cost exchanger
Directory of Open Access Journals (Sweden)
Khalfe Nadeem M.
2011-01-01
Owing to the wide utilization of heat exchangers in industrial processes, their cost minimization is an important target for both designers and users. Traditional design approaches are based on iterative procedures that gradually change the design and geometric parameters to satisfy a given heat duty and constraints. Although well proven, this kind of approach is time-consuming and may not lead to a cost-effective design, as no cost criteria are explicitly accounted for. The present study explores the use of a non-traditional optimization technique, simulated annealing (SA), for the economic design optimization of shell-and-tube heat exchangers. The optimization procedure involves the selection of major geometric parameters such as tube diameter, tube length, baffle spacing, number of tube passes, tube layout, type of head and baffle cut, with minimization of total annual cost as the design target. The presented simulated annealing technique is simple in concept, has few parameters and is easy to implement. Furthermore, the SA algorithm finds good-quality solutions quickly, giving the designer more degrees of freedom in the final choice than traditional methods. The methodology takes into account the geometric and operational constraints typically recommended by design codes. Three case studies are presented to demonstrate the effectiveness and accuracy of the proposed algorithm. The SA approach is able to reduce the total cost of the heat exchanger compared with the cost obtained by a previously reported GA approach.
Simulated Annealing-Based Krill Herd Algorithm for Global Optimization
Directory of Open Access Journals (Sweden)
Gai-Ge Wang
2013-01-01
Recently, Gandomi and Alavi proposed a novel swarm-intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic, simulated annealing-based krill herd (SKH), is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance reliability and robustness when dealing with optimization problems. The introduced KS operator combines a greedy strategy with accepting a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, an elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
Morgan, John A
2016-01-01
The method of simulated annealing is adapted to the temperature-emissivity separation (TES) problem. A patch of surface at the bottom of the atmosphere is assumed to be a greybody emitter with spectral emissivity $\\epsilon(k)$ describable by a mixture of spectral endmembers. We prove that a simulated annealing search conducted according to a suitable schedule converges to a solution maximizing the $\\textit{A-Posteriori}$ probability that spectral radiance detected at the top of the atmosphere originates from a patch with stipulated $T$ and $\\epsilon(k)$. Any such solution will be nonunique. The average of a large number of simulated annealing solutions, however, converges almost surely to a unique Maximum A-Posteriori solution for $T$ and $\\epsilon(k)$. The limitation to a stipulated set of endmember emissivities may be relaxed by allowing the number of endmembers to grow without bound, and to be generic continuous functions of wavenumber with bounded first derivatives with respect to wavenumber.
spsann - optimization of sample patterns using spatial simulated annealing
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geosciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag-distance class, PPL), trend estimation (association/correlation and marginal distribution of the covariates, ACDC), and spatial interpolation (mean squared shortest distance, MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
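The schedule described above, a maximum perturbation distance that shrinks linearly with the iteration count while the acceptance temperature decays exponentially, can be sketched for the MSSD criterion as follows. This is an illustrative reconstruction in Python, not spsann's R implementation, and every name and constant here is made up:

```python
import math
import random

def mssd(samples, grid):
    """Mean squared shortest distance from each grid node to the nearest sample."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in samples)
               for gx, gy in grid) / len(grid)

def spatial_anneal(n_points=8, iters=3000, t0=0.01, seed=1):
    """Optimize a sample pattern in the unit square for spatial interpolation."""
    rng = random.Random(seed)
    grid = [(i / 9, j / 9) for i in range(10) for j in range(10)]
    samples = [(rng.random(), rng.random()) for _ in range(n_points)]
    e0 = best = energy = mssd(samples, grid)
    for k in range(iters):
        max_dist = 0.5 * (1 - k / iters)     # jitter shrinks linearly
        t = t0 * math.exp(-5 * k / iters)    # temperature decays exponentially
        i = rng.randrange(n_points)
        sx, sy = samples[i]
        cand = (min(1.0, max(0.0, sx + rng.uniform(-max_dist, max_dist))),
                min(1.0, max(0.0, sy + rng.uniform(-max_dist, max_dist))))
        trial = samples[:i] + [cand] + samples[i + 1:]
        e_new = mssd(trial, grid)
        if e_new < energy or rng.random() < math.exp(-(e_new - energy) / t):
            samples, energy = trial, e_new
            best = min(best, energy)
    return e0, best

initial, optimized = spatial_anneal()
print(initial, optimized)
```

Large early jitter explores the whole region; the shrinking jitter and temperature turn the late phase into a local refinement of the sample pattern.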
Memoryless cooperative graph search based on the simulated annealing algorithm
Institute of Scientific and Technical Information of China (English)
Hou Jian; Yan Gang-Feng; Fan Zhen
2011-01-01
We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
Simulated annealing and joint manufacturing batch-sizing
Directory of Open Access Journals (Sweden)
Sarker Ruhul
2003-01-01
Full Text Available We address an important problem of a manufacturing system. The system procures raw materials from outside suppliers in a lot and processes them to produce finished goods. We propose an ordering policy for raw materials to meet the requirements of a production facility, which in turn must deliver finished products demanded by external buyers at fixed time intervals. First, a general cost model is developed considering both raw materials and finished products. This model is then used in a simulated annealing approach to determine an optimal ordering policy for the procurement of raw materials, and the manufacturing batch size, that minimize the total cost of meeting customer demands in time. The solutions obtained are compared with those of traditional approaches, and numerical examples are presented.
Restoration of polarimetric SAR images using simulated annealing
DEFF Research Database (Denmark)
Schou, Jesper; Skriver, Henning
2001-01-01
Filtering synthetic aperture radar (SAR) images ideally results in better estimates of the parameters characterizing the distributed targets in the images while preserving the structures of the nondistributed targets. However, these objectives are normally conflicting, often leading to a filtering approach favoring one of the objectives. An algorithm for estimating the radar cross-section (RCS) for intensity SAR images has previously been proposed in the literature based on Markov random fields and the stochastic optimization method simulated annealing. A new version of the algorithm is presented in which better parameter estimates are obtained while at the same time preserving most of the structures in the image. The algorithm is evaluated using multilook polarimetric L-band data from the Danish airborne EMISAR system, and the impact of the algorithm on the unsupervised H-α classification is demonstrated.
Optimization of multiple-layer microperforated panels by simulated annealing
DEFF Research Database (Denmark)
Ruiz Villamil, Heidi; Cobo, Pedro; Jacobsen, Finn
2011-01-01
Sound absorption by microperforated panels (MPP) has received increasing attention in the past years as an alternative to conventional porous absorbers in applications with special cleanliness and health requirements. The absorption curve of an MPP depends on four parameters: the hole diameter, the panel thickness, the perforation ratio, and the thickness of the air cavity between the panel and an impervious wall. It is possible to find a proper combination of these parameters that provides an MPP absorbing in one or two octave bands within the frequency range of interest for noise control. Therefore, simulated annealing is proposed in this paper as a tool to solve the optimization problem of finding the best combination of the constitutive parameters of a multiple-layer MPP (ML-MPP) providing the maximum average absorption within a prescribed frequency band.
A Scheduling Algorithm Based on Petri Nets and Simulated Annealing
Directory of Open Access Journals (Sweden)
Rachida H. Ghoul
2007-01-01
Full Text Available This study addresses a hybrid flexible manufacturing system (HFMS) short-term scheduling problem. Based on the state of the art of general scheduling algorithms, we present the meta-heuristic we chose to apply to a given example of an HFMS: the simulated annealing (SA) algorithm. An HFMS model based on hierarchical Petri nets is used to represent the static and dynamic behavior of the HFMS and to design scheduling solutions. The hierarchical Petri net model is regarded as being made up of a set of single timed colored Petri net models, each representing one process composed of many operations and tasks. The complex scheduling problem is thus decomposed into simple sub-problems, and the scheduling algorithm is applied to each sub-model in order to resolve conflicts on shared production resources.
Fuzzy unit commitment solution - A novel twofold simulated annealing approach
Energy Technology Data Exchange (ETDEWEB)
Saber, Ahmed Yousuf; Senjyu, Tomonobu; Yona, Atsushi; Urasaki, Naomitsu [Faculty of Engineering, University of the Ryukyus, 1 Senbaru, Nishihara-cho Nakagami, Okinawa 903-0213 (Japan); Funabashi, Toshihisa [Meidensha Corporation, Riverside Building 36-2, Tokyo 103-8515 (Japan)
2007-10-15
The authors propose a twofold simulated annealing (twofold-SA) method for the optimization of a fuzzy unit commitment formulation in this paper. In the proposed method, simulated annealing (SA) and fuzzy logic are combined to obtain SA acceptance probabilities from fuzzy membership degrees. Fuzzy load is calculated from error statistics, and an initial solution is generated by a priority list method. The initial solution is decomposed into hourly schedules, and each hourly schedule is modified by decomposed-SA using a bit-flipping operator. Fuzzy membership degrees are the selection attributes of the decomposed-SA. A new solution consists of these hourly schedules of the entire scheduling period after repair, as unit-wise constraints may not be fulfilled at the time of an individual hourly schedule modification. This helps to detect and modify promising schedules of appropriate hours. In coupling-SA, this new solution is accepted for the next iteration if its cost is less than that of the current solution; however, a higher-cost new solution is accepted with the temperature-dependent total-cost membership function. The computation time of the proposed method is also improved by the imprecise tolerance of the fuzzy model. Besides, excess units with a system-dependent probability distribution help to handle constraints efficiently, and imprecise economic load dispatch (ELD) calculations are modified to save execution time. The proposed method is tested using standard reported data sets. Numerical results show an improvement in solution cost and time compared to the results obtained from other existing methods. (author)
Directory of Open Access Journals (Sweden)
Wenbo Wu
2014-01-01
Full Text Available This paper addresses the problem of task allocation in real-time distributed systems with the goal of maximizing the system reliability, which has been shown to be NP-hard. We take account of the deadline constraint to formulate this problem and then propose an algorithm called chaotic adaptive simulated annealing (XASA) to solve it. First, XASA begins with chaotic optimization, which takes a chaotic walk in the solution space and generates several local minima; second, XASA improves the SA algorithm via several adaptive schemes and continues to search for the optimum based on the results of the chaotic optimization. The effectiveness of XASA is evaluated by comparison with the traditional SA algorithm and an improved SA algorithm. The results show that XASA achieves a satisfactory speedup without loss of solution quality.
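The two-stage structure described above, a chaotic walk that scatters candidates through the solution space followed by SA refinement under a fast-decaying temperature, can be sketched on a toy one-dimensional objective. This is an illustrative reconstruction, not the paper's XASA implementation; the logistic map, the test function, and all parameters are assumptions:

```python
import math
import random

def logistic_walk(f, lo, hi, n=200, x0=0.37):
    """Chaotic stage: a logistic map on (0, 1) scaled into [lo, hi],
    recording the best point visited."""
    c, best = x0, None
    for _ in range(n):
        c = 4.0 * c * (1.0 - c)            # fully chaotic logistic map
        x = lo + c * (hi - lo)
        if best is None or f(x) < f(best):
            best = x
    return best

def refine_sa(f, x, t0=1.0, steps=1000, seed=3):
    """SA stage: local Gaussian moves with a rapidly cooling temperature."""
    rng = random.Random(seed)
    best = x
    for k in range(steps):
        t = t0 / (1 + k)                   # fast, adaptive-style cooling
        y = x + rng.gauss(0, 0.1)
        d = f(y) - f(x)
        if d <= 0 or rng.random() < math.exp(-d / t):
            x = y
            if f(x) < f(best):
                best = x
    return best

# Multimodal test function with its global minimum at x = 0.
f = lambda x: x * x + 0.3 * math.sin(25 * x) ** 2
x_start = logistic_walk(f, -2.0, 2.0)
x_best = refine_sa(f, x_start)
print(x_start, x_best)
```

The chaotic stage supplies a good basin; SA then only has to refine locally, which is the source of the speedup the abstract reports.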
Enhanced Simulated Annealing for Solving Aggregate Production Planning
Directory of Open Access Journals (Sweden)
Mohd Rizam Abu Bakar
2016-01-01
Full Text Available Simulated annealing (SA) has been an effective means of addressing difficulties related to optimization problems, and is now a common research discipline with several productive applications such as production planning. Because aggregate production planning (APP) is one of the most considerable problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimize it by SA. In the course of optimizing the APP problem, we found that the capability of SA was inadequate and its performance substandard, particularly for a sizable constrained APP problem with many decision variables and plenty of constraints: since the algorithm works sequentially, the current state generates only one next state, which slows the search, and the search may become trapped in a local minimum that is the best solution in only part of the solution space. In order to enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed, in which we augment the search space by starting with N+1 solutions instead of one. To analyze and investigate the operation of the MSA against the standard SA and harmony search (HS), an evaluation is made using real performance data of an industrial company together with simulation. The results show that, compared to SA and HS, MSA offers better-quality solutions with regard to convergence and accuracy.
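The N+1-start modification can be sketched generically: run an independent SA chain from each of N+1 starting solutions and keep the overall best. The multimodal objective below is a stand-in, not the paper's APP model, and all parameter values are arbitrary:

```python
import math
import random

def sa(f, x0, rng, t0=1.0, cooling=0.99, steps=500):
    """Plain sequential SA: one current state, one candidate per step."""
    x = best = x0
    t = t0
    for _ in range(steps):
        y = x + rng.uniform(-0.5, 0.5)
        d = f(y) - f(x)
        if d <= 0 or rng.random() < math.exp(-d / t):
            x = y
            if f(x) < f(best):
                best = x
        t *= cooling
    return best

def multistart_sa(f, n, lo, hi, seed=7):
    """MSA-style search: run SA from n + 1 random starts, keep the overall best."""
    rng = random.Random(seed)
    starts = [rng.uniform(lo, hi) for _ in range(n + 1)]
    return min((sa(f, s, rng) for s in starts), key=f)

# Rastrigin-style 1-D objective: many local minima, global minimum 0 at x = 0.
f = lambda x: x * x - 10 * math.cos(2 * math.pi * x) + 10
print(f(multistart_sa(f, 4, -5.0, 5.0)))
```

Starting from several solutions makes it far more likely that at least one chain begins in (or reaches) the basin of the global minimum, which is exactly the deficiency of single-chain SA the abstract identifies.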
Directory of Open Access Journals (Sweden)
I Gede Agus Widyadana
2002-01-01
Full Text Available This research compares a genetic algorithm and simulated annealing in terms of performance and processing time. The main purpose is to assess the ability of both algorithms to minimize makespan and total flowtime in a particular flowshop system. The performance of the algorithms is measured by simulating problems with varying combinations of jobs and machines. The results show that simulated annealing outperforms the genetic algorithm by up to 90%; the genetic algorithm wins only on processing time, but the observed trend suggests that on problems with many jobs and many machines, simulated annealing will run much faster than the genetic algorithm.
Static Security Enhancement and Loss Minimization Using Simulated Annealing
Directory of Open Access Journals (Sweden)
A.Y. Abdelaziz
2013-03-01
Full Text Available This paper presents a developed algorithm for optimal placement of thyristor controlled series capacitors (TCSCs) to enhance power system static security and minimize overall system power loss. Placing TCSCs at selected branches requires analysis of the system behavior under all possible contingencies. A selective procedure to determine the locations and settings of the TCSCs is presented. The locations are determined by evaluating a contingency sensitivity index (CSI) for a given power system branch over a given number of contingencies. This criterion is then used to develop a branch prioritizing index in order to rank the system branches suitable for placement of the TCSCs. Optimal settings of the TCSCs are determined by the optimization technique of simulated annealing (SA), where settings are chosen to minimize the overall power system losses. The goal of the developed methodology is to enhance power system static security by alleviating or eliminating overloads on the transmission lines and maintaining the voltages at all load buses within their specified limits through the optimal placement and setting of TCSCs under single and double line outage network contingencies. The proposed algorithm is examined using different IEEE standard test systems to show its superiority in enhancing system static security and minimizing system losses.
Simulated Annealing Technique for Routing in a Rectangular Mesh Network
Directory of Open Access Journals (Sweden)
Noraziah Adzhar
2014-01-01
Full Text Available In the process of automatic design of printed circuit boards (PCBs), the phase following cell placement is routing. Routing is a notoriously difficult problem: even the simplest routing problem, consisting of a set of two-pin nets, is known to be NP-complete. In this research, the routing region is first tessellated into a uniform Nx × Ny array of square cells. The ultimate goal for a routing problem is complete automatic routing with minimal need for manual intervention, so a shortest path for every connection needs to be established. While the classical Dijkstra algorithm is guaranteed to find the shortest path for a single net, each routed net forms an obstacle for later paths. This makes later nets harder to route, forcing routes longer than the optimal path or sometimes impossible to complete. Today's sequential routing often applies a heuristic method to further refine the solution: all nets are rerouted in a different order to improve the quality of the routing. Because of this, we are motivated to apply simulated annealing, one of the metaheuristic methods, to our routing model to produce better candidate sequences.
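The interaction described above, shortest-path routing per net with earlier nets blocking later ones, plus a metaheuristic over the routing order, can be sketched as follows. Grid size, pin locations, and the penalty constant are invented for illustration; this is not the paper's router:

```python
import math
import random
from collections import deque

def bfs_route(w, h, blocked, src, dst):
    """Shortest path on a w x h cell grid avoiding blocked cells.
    With unit step costs, Dijkstra's algorithm reduces to BFS."""
    prev = {src: None}
    q = deque([src])
    while q:
        cell = q.popleft()
        if cell == dst:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cell
                q.append(nxt)
    return None

def total_cost(order, nets, w, h):
    """Route nets sequentially in the given order; earlier routes block later ones."""
    blocked, cost = set(), 0
    for idx in order:
        src, dst = nets[idx]
        path = bfs_route(w, h, blocked, src, dst)
        if path is None:
            cost += 10 * (w + h)          # heavy penalty for an unroutable net
        else:
            cost += len(path) - 1
            blocked.update(path)
    return cost

def anneal_order(nets, w, h, steps=200, t0=5.0, cooling=0.97, seed=9):
    """SA over net orderings: swap two nets, accept by the Metropolis rule."""
    rng = random.Random(seed)
    order = list(range(len(nets)))
    best = order[:]
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(nets)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        d = total_cost(cand, nets, w, h) - total_cost(order, nets, w, h)
        if d <= 0 or rng.random() < math.exp(-d / t):
            order = cand
            if total_cost(order, nets, w, h) < total_cost(best, nets, w, h):
                best = order[:]
        t *= cooling
    return best

nets = [((0, 0), (7, 7)), ((1, 0), (7, 6)), ((0, 1), (6, 7))]
best = anneal_order(nets, 8, 8)
print(best, total_cost(best, nets, 8, 8))
```

The annealed ordering can never be worse than the initial one here, since the best ordering seen is retained; on congested instances the reordering is what recovers routability.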
Optical Design of Multilayer Achromatic Waveplate by Simulated Annealing Algorithm
Institute of Scientific and Technical Information of China (English)
Jun Ma; Jing-Shan Wang; Carsten Denker; Hai-Min Wang
2008-01-01
We applied a Monte Carlo method, the simulated annealing algorithm, to the design of multilayer achromatic waveplates. We present solutions for three-, six-, and ten-layer achromatic waveplates. The optimized retardance settings are found to be 89°51'39"±0°33'37" and 89°54'46"±0°22'4" for the six- and ten-layer waveplates, respectively, for a wavelength range from 1000 nm to 1800 nm. The polarimetric properties of multilayer waveplates are investigated in several numerical experiments. In contrast to the previously proposed three-layer achromatic waveplate, the fast axes of the new six- and ten-layer achromatic waveplates remain at fixed angles, independent of the wavelength. Two applications of multilayer achromatic waveplates are discussed: the general-purpose phase shifter and the birefringent filter in the Infrared Imaging Magnetograph (IRIM) system of the Big Bear Solar Observatory (BBSO). We also tested an experimental method to measure the retardance of waveplates.
Traveling Salesman Approach for Solving Petrol Distribution Using Simulated Annealing
Directory of Open Access Journals (Sweden)
Zuhaimy Ismail
2008-01-01
Full Text Available This research presents an attempt to solve a logistics company's problem of delivering petrol to petrol stations in the state of Johor. The delivery system is formulated as a travelling salesman problem (TSP): finding an optimal route for visiting all stations and returning to the point of origin, where the inter-station distances are symmetric and known. This real-world application is a deceptively simple combinatorial problem, and our approach is to develop solutions based on the ideas of local search and meta-heuristics. The objective is defined simply as the time spent or distance travelled by the salesman visiting the n stations (or nodes) cyclically: in one tour the vehicle visits each station just once and finishes where it started. This research presents the development of a solution engine based on a local search method known as the greedy method; with its result as the initial solution, simulated annealing (SA) and tabu search (TS) are then used to improve the search and provide the best solution. A user-friendly optimization program developed in Microsoft C++ solves the TSP and provides solutions to future TSPs, which may be classified as daily or advanced management and engineering problems.
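The solution engine described, a greedy construction whose result seeds a metaheuristic refinement, can be sketched with a nearest-neighbour tour followed by 2-opt simulated annealing on random coordinates. Everything below (instance size, coordinates, parameters) is illustrative; the paper's real instances are petrol stations in Johor, and its refinement also includes tabu search, which is omitted here:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def greedy_tour(pts):
    """Greedy (nearest-neighbour) construction starting from city 0."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def sa_2opt(tour, pts, t0=1.0, cooling=0.995, steps=4000, seed=5):
    """Refine a tour with 2-opt segment reversals under Metropolis acceptance."""
    rng = random.Random(seed)
    best = tour[:]
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        d = tour_length(cand, pts) - tour_length(tour, pts)
        if d <= 0 or rng.random() < math.exp(-d / t):
            tour = cand
            if tour_length(tour, pts) < tour_length(best, pts):
                best = tour[:]
        t *= cooling
    return best

random.seed(42)
pts = [(random.random(), random.random()) for _ in range(20)]
start = greedy_tour(pts)
refined = sa_2opt(start, pts)
print(tour_length(start, pts), tour_length(refined, pts))
```

Seeding SA with the greedy tour rather than a random permutation is what makes the refinement phase short: SA only has to undo the crossings the greedy construction leaves behind.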
Hybrid annealing using a quantum simulator coupled to a classical computer
Graß, Tobias
2016-01-01
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Simulated annealing is a computational technique which explores the configuration space by mimicking thermal noise. By slow cooling, it freezes the system in a low-energy configuration, but the algorithm often gets stuck in local minima. In quantum annealing, the thermal noise is replaced by controllable quantum fluctuations, and the technique can be implemented in modern quantum simulators. However, quantum-adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here we propose a strategy which combines ideas from simulated annealing and quantum annealing. In this hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configurati...
Sensitivity study on hydraulic well testing inversion using simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1997-11-01
For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
Differential evolution and simulated annealing algorithms for mechanical systems design
Directory of Open Access Journals (Sweden)
H. Saruhan
2014-09-01
Full Text Available In this study, nature-inspired algorithms, differential evolution (DE) and simulated annealing (SA), are utilized to seek a global optimum solution for the weight of a ball-bearing link system assembly with constraints and mixed design variables. The genetic algorithm (GA) and the evolution strategy (ES) serve as references for the examination and validation of the DE and the SA. The main purpose is to minimize the weight of an assembly composed of a shaft and two ball bearings. Ball-bearing link systems are used extensively in many machinery applications, and among mechanical systems designers pay great attention to them because of their significant industrial importance. The problem is complex and time consuming due to the mixed design variables and inequality constraints imposed on the objective function. The results showed that the DE and the SA performed well and converged reliably to the global optimum solution, so their application to mechanical system design can be very useful in many real-world design problems. Moreover, the comparison confirms the effectiveness and superiority of the DE over the other algorithms, the SA, the GA, and the ES, in terms of solution quality. A ball-bearing link system assembly weight of 634,099 gr was obtained using the DE, while 671,616 gr, 728,213.8 gr, and 729,445.5 gr were obtained using the SA, the ES, and the GA, respectively.
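DE's core move, mutating with a scaled difference of two population members, crossing over binomially, then selecting greedily, can be sketched as follows. The sphere objective and all parameter values are illustrative stand-ins, not the paper's ball-bearing model:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=11):
    """Classic DE/rand/1/bin sketch: mutation a + F*(b - c),
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):    # greedy selection keeps the better one
                pop[i] = trial
    return min(pop, key=f)

# Smooth 3-D test objective with its global minimum 0 at the origin.
sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(best)
```

Unlike SA, DE carries a whole population, so its difference vectors automatically adapt the step size to the landscape; that self-scaling is one common explanation for the solution-quality edge reported above.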
Wanneng Shu
2009-01-01
Quantum-inspired genetic algorithm (QGA) is applied to simulated annealing (SA) to develop a class of quantum-inspired simulated annealing genetic algorithms (QSAGA) for combinatorial optimization. While preserving the advantages of QGA, QSAGA takes advantage of the SA algorithm to avoid premature convergence. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem. The results show that QSAGA performs well, without premature conve...
Directory of Open Access Journals (Sweden)
Gregorius Satia Budhi
2003-01-01
Full Text Available Flexible Manufacturing System (FMS is a manufacturing system that is formed from several Numerical Controlled Machines combine with material handling system, so that different jobs can be worked by different machines sequences. FMS combine the high productivity and flexibility of Transfer Line and Job Shop manufacturing system. In this reasearch, Activity-Based Costing(ABC approach was used as the weight to search the operation route in the proper machine, so that the total production cost can be optimized. The search method that was used in this experiment is Simulated Annealling, a variant form Hill Climbing Search method. An ideal operation time to proses a part was used as the annealling schedule. From the empirical test, it could be proved that the use of ABC approach and Simulated Annealing to search the route (routing process can optimize the Total Production Cost. In the other hand, the use of ideal operation time to process a part as annealing schedule can control the processing time well. Abstract in Bahasa Indonesia : Flexible Manufacturing System (FMS adalah sistem manufaktur yang tersusun dari mesin-mesin Numerical Control (NC yang dikombinasi dengan Sistem Penanganan Material, sehingga job-job berbeda dikerjakan oleh mesin-mesin dengan alur yang berlainan. FMS menggabungkan produktifitas dan fleksibilitas yang tinggi dari Sistem Manufaktur Transfer Line dan Job Shop. Pada riset ini pendekatan Activity-Based Costing (ABC digunakan sebagai bobot / weight dalam pencarian rute operasi pada mesin yang tepat, untuk lebih mengoptimasi biaya produksi secara keseluruhan. Adapun metode Searching yang digunakan adalah Simulated Annealing yang merupakan varian dari metode searching Hill Climbing. Waktu operasi ideal untuk memproses sebuah part digunakan sebagai Annealing Schedulenya. Dari hasil pengujian empiris dapat dibuktikan bahwa penggunaan pendekatan ABC dan Simulated Annealing untuk proses pencarian rute (routing dapat lebih
Computer simulation of laser annealing of a nanostructured surface
Ivanov, D.; Marinov, I.; Gorbachev, Y.; Smirnov, A.; Krzhizhanovskaya, V.
2010-01-01
Laser annealing technology is used in the mass production of new-generation semiconductor materials and nano-electronic devices such as MOS-based (metal-oxide-semiconductor) integrated circuits. Manufacturing sub-100 nm MOS devices demands the application of ultra-shallow doping (junctions), which requires
Theodorakos, I.; Zergioti, I.; Vamvakas, V.; Tsoukalas, D.; Raptis, Y. S.
2014-01-01
In this work, a picosecond diode-pumped solid-state laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nano-crystallization of an amorphous silicon layer. These experiments were conducted as an alternative or complement to the plasma-enhanced chemical vapor deposition method for fabrication of micromorph tandem solar cells. The laser experimental work was combined with simulations of the annealing process, in terms of the evolution of the temperature distribution, in order to predetermine the optimum annealing conditions. The annealed material was studied, as a function of several annealing parameters (wavelength, pulse duration, fluence), with respect to its structural properties by X-ray diffraction, SEM, and micro-Raman techniques.
Green, P. L.
2015-02-01
This work details the Bayesian identification of a nonlinear dynamical system using a novel MCMC algorithm: 'Data Annealing'. Data Annealing is similar to Simulated Annealing in that it allows the Markov chain to easily clear 'local traps' in the target distribution. To achieve this, training data is fed into the likelihood such that its influence over the posterior is introduced gradually; this allows the annealing procedure to be conducted with reduced computational expense. Additionally, Data Annealing uses a proposal distribution which allows it to conduct a local search accompanied by occasional long jumps, reducing the chance that it will become stuck in local traps. Here it is used to identify an experimental nonlinear system. The resulting Markov chains are used to approximate the covariance matrices of the parameters in a set of competing models before the issue of model selection is tackled using the Deviance Information Criterion.
Speagle, Joshua S; Eisenstein, Daniel J; Masters, Daniel C; Steinhardt, Charles L
2015-01-01
Using a grid of $\\sim 2$ million elements ($\\Delta z = 0.005$) adapted from COSMOS photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound rapid but "greedy" optimization schemes, even with additional stochastic sampling methods. In order to robustly and efficiently explore these surfaces, we develop BAD-Z [Brisk Annealing-Driven Redshifts (Z)], which combines ensemble Markov Chain Monte Carlo (MCMC) sampling with simulated annealing to sample arbitrarily large, pre-generated grids in approximately constant time. Using a mock catalog of 384,662 objects, we show BAD-Z samples $\\sim 40$ times more efficiently compared to a brute-force counterpart while maintaining similar levels of accuracy. Our results represent first steps toward designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation...
Simulation of annealing process effect on texture evolution of deep-drawing sheet St15
Institute of Scientific and Technical Information of China (English)
Jinghong Sun; Yazheng Liu; Leyu Zhou
2005-01-01
A two-dimensional cellular automaton method was used to simulate grain growth during the recrystallization annealing of deep-drawing sheet St15, taking the simulated result of recrystallization and the experimental result of the annealing texture of deep-drawing sheet St15 as the initial condition and reference. By means of computer simulation, the microstructures and textures of different periods of grain growth were predicted. The grain size, shape and texture become stable after grain growth at a constant temperature of 700°C for 10 h, and the favorable {111} texture components are dominant.
Liang, Faming
2014-04-03
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
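The practical gap between the logarithmic and square-root cooling schedules can be made concrete by counting how many iterations each needs before the temperature falls below a threshold. The constants below are illustrative, not from the paper:

```python
import math

def log_schedule(k, t0=1.0):
    """Classical logarithmic cooling, T_k = t0 / log(k + e): the slow
    schedule under which plain SA guarantees reaching the global optima."""
    return t0 / math.log(k + math.e)

def sqrt_schedule(k, t0=1.0):
    """Square-root cooling, T_k = t0 / sqrt(k + 1): far faster decay,
    of the kind the stochastic-approximation framework permits."""
    return t0 / math.sqrt(k + 1)

def steps_until(schedule, target):
    """Smallest k with schedule(k) < target, via doubling plus bisection
    (both schedules are monotonically decreasing in k)."""
    lo, hi = 0, 1
    while schedule(hi) >= target:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if schedule(mid) < target:
            hi = mid
        else:
            lo = mid + 1
    return lo

for target in (0.1, 0.05, 0.02):
    print(target, steps_until(sqrt_schedule, target),
          steps_until(log_schedule, target))
```

Reaching temperature 0.05 takes the square-root schedule a few hundred iterations but the logarithmic schedule on the order of $e^{20} \approx 5\times10^{8}$, which is why a faster schedule with an intact convergence guarantee matters.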
Simulated annealing algorithm for TSP
Institute of Scientific and Technical Information of China (English)
朱静丽
2011-01-01
The traveling salesman problem (TSP) is a combinatorial optimization problem with NP-complete computational complexity. This paper analyzes the simulated annealing algorithm model, studies the feasibility of solving the TSP with simulated annealing, and gives a concrete implementation of a simulated annealing algorithm for the TSP.
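A concrete implementation along the lines this abstract describes can be sketched as follows. The city coordinates, cooling parameters, and 2-opt neighbor move are illustrative choices, not taken from the paper:

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, t0=1.0, cooling=0.999, steps=30000, seed=1):
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    length = tour_length(tour, cities)
    t = t0
    for _ in range(steps):
        # 2-opt move: reverse a random segment of the tour.
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, cities)
        # Metropolis rule: accept worse tours with probability exp(-dL / T).
        if cand_len < length or rng.random() < math.exp((length - cand_len) / t):
            tour, length = cand, cand_len
        t *= cooling  # geometric cooling schedule
    return tour, length

# Eight cities on a unit circle: the optimal tour follows the circle,
# with length 8 * 2 * sin(pi / 8).
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
best_tour, best_len = anneal_tsp(cities)
```

Placing the cities in convex position makes the optimum easy to verify, since the shortest tour is simply the perimeter order.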
Cook, Darcy; Ferens, Ken; Kinsner, Witold
Simulated Annealing (SA) has proven to be a successful technique for optimization problems. It has been applied to both continuous function optimization and combinatorial optimization problems. There has been some work on modifying the SA algorithm to incorporate properties of chaotic processes, with the goal of reducing the time to converge to an optimal or good solution. Several variations of these chaotic simulated annealing (CSA) algorithms exist. In this paper a new variation of chaotic simulated annealing is proposed and applied to a combinatorial optimization problem in multiprocessor task allocation. The experiments show that the CSA algorithms reach a good solution faster than traditional SA algorithms in many cases because of a wider initial solution search.
Institute of Scientific and Technical Information of China (English)
CHUShuchuan; JohnF.Roddick
2003-01-01
In this paper, a cluster generation algorithm for vector quantization using a tabu search approach with simulated annealing is proposed. The main idea of this algorithm is to use the tabu search approach to generate non-local moves for the clusters and apply the simulated annealing technique to select the current best solution, thus improving the cluster generation and reducing the mean squared error. Preliminary experimental results demonstrate that the proposed approach is superior to the tabu search approach with the Generalised Lloyd algorithm.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems, so the Google result might be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Institute of Scientific and Technical Information of China (English)
高红民; 周惠; 徐立中; 石爱业
2014-01-01
A hybrid feature selection and classification strategy was proposed based on the simulated annealing genetic algorithm and multiple instance learning (MIL). The band selection method was based on subspace decomposition, combining the simulated annealing algorithm with the genetic algorithm by choosing different crossover and mutation probabilities, as well as mutation individuals. MIL was then combined with image segmentation, clustering and support vector machine algorithms to classify hyperspectral images. The experimental results show that the proposed method achieves a high classification accuracy of 93.13% with small training samples and overcomes the weaknesses of conventional methods.
DEFF Research Database (Denmark)
Sousa, Tiago M; Soares, Tiago; Morais, Hugo;
2016-01-01
The massive use of distributed generation and electric vehicles will lead to a more complex management of the power system, requiring new approaches to be used in the optimal resource scheduling field. Electric vehicles with vehicle-to-grid capability can be useful for the aggregator players...... of the aggregator's total operation costs. The case study considers a distribution network with 33 buses, 66 distributed generation units and 2000 electric vehicles. The proposed simulated annealing is matched against a deterministic approach, allowing an effective and efficient comparison. The simulated annealing presents......
DEFF Research Database (Denmark)
Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup
2011-01-01
The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing a Chordal Ring structure in an optical backbone network. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology...... of the networks. Evolutionary algorithms have often been used to solve problems of a combinatorial nature that are extremely hard to solve by exact approaches. Both Genetic and Simulated Annealing algorithms use a controlled stochastic method to search the solution......
Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA.
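The contrast between the two acceptance distributions can be sketched as follows. The exact Bose-Einstein acceptance expression here is a guess for illustration; the form used in MPSABBE may differ:

```python
import math

def boltzmann_accept(delta_e, t):
    """Classical SA acceptance: P = exp(-dE/T) for uphill moves."""
    return 1.0 if delta_e <= 0 else math.exp(-delta_e / t)

def bose_einstein_accept(delta_e, t):
    """A Bose-Einstein-style acceptance, P = 1/(exp(dE/T) - 1), capped at 1.
    This exact form is hypothetical; the paper's expression may differ."""
    if delta_e <= 0:
        return 1.0
    return min(1.0, 1.0 / (math.exp(delta_e / t) - 1.0))

# With this form the Bose-Einstein rule is more permissive for uphill moves
# at the same temperature; in MPSABBE the phases differ mainly in the
# temperature ranges over which each distribution is applied.
t = 1.0
probs = {de: (boltzmann_accept(de, t), bose_einstein_accept(de, t))
         for de in (0.5, 1.0, 2.0)}
```

Swapping the acceptance function between phases, as the four-phase design does, changes how readily the search escapes local minima at each stage of cooling.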
Optimal design of hydraulic manifold blocks based on niching genetic simulated annealing algorithm
Institute of Scientific and Technical Information of China (English)
Jia Chunqiang; Yu Ling; Tian Shujun; Gao Yanming
2007-01-01
To solve the combinatorial optimization problem of integrated outer-layout and inner-connection schemes in the design of hydraulic manifold blocks (HMB), a hybrid genetic simulated annealing algorithm based on niche technology is presented. This hybrid algorithm, which combines the genetic algorithm, the simulated annealing algorithm and niche technology, has a strong capability in global and local search, and all extrema can be found in a short time without strict requirements on parameter settings. For the complex constrained spatial layout problems in HMB, the optimization mathematical model is presented. The key technologies in the integrated layout and connection design of HMB, including the realization of coding, annealing operations and genetic operations, are discussed. The framework of an HMB optimal design system based on the hybrid optimization strategy is proposed. An example is given to verify the effectiveness and feasibility of the algorithm.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-30
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.
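The weighted-sum formulation used in the cost function above can be sketched for two candidate routes. The road data, identifiers, and weights are all invented; this is a simplification of the paper's method, not its implementation:

```python
def path_cost(path, road_length, avg_speed, w_time=10.0, w_dist=0.1):
    """Weighted-sum cost of a path from per-road length (km) and average
    speed (km/h). Road identifiers, data and weights are all hypothetical."""
    travel_time = sum(road_length[e] / avg_speed[e] for e in path)  # hours
    distance = sum(road_length[e] for e in path)                    # km
    return w_time * travel_time + w_dist * distance

# Two candidate routes between the same endpoints (invented data):
road_length = {("a", "b"): 2.0, ("b", "c"): 3.0, ("a", "d"): 4.0, ("d", "c"): 2.0}
avg_speed = {("a", "b"): 10.0, ("b", "c"): 50.0, ("a", "d"): 60.0, ("d", "c"): 60.0}
congested = [("a", "b"), ("b", "c")]   # shorter, but slow on segment ("a", "b")
detour = [("a", "d"), ("d", "c")]      # longer, but free-flowing
# With travel time weighted heavily, the cost function prefers the detour.
```

In the paper, such a scalar cost (or its TOPSIS counterpart) is what the simulated annealing search minimizes over candidate paths, with average speeds fed in from city sensors.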
Stochastic annealing simulation of copper under neutron irradiation
Energy Technology Data Exchange (ETDEWEB)
Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N. [Risoe National Lab., Roskilde (Denmark)
1998-03-01
This report is a summary of a presentation made at ICFRM-8 on computer simulations of defect accumulation during irradiation of copper to low doses at room temperature. The simulation results are in good agreement with experimental data on defect cluster densities in copper irradiated in RTNS-II.
Using genetic/simulated annealing algorithm to solve disassembly sequence planning
Institute of Scientific and Technical Information of China (English)
Wu Hao; Zuo Hongfu
2009-01-01
...disassembly sequence. The solution methodology, based on a genetic/simulated annealing algorithm with a binary-tree algorithm, is given. Finally, an example is analyzed in detail, and the result shows that the model is correct and efficient.
Directory of Open Access Journals (Sweden)
Thamilselvan Rakkiannan
2012-01-01
Problem statement: The Job Shop Scheduling Problem (JSSP) is regarded as one of the most difficult NP-hard combinatorial problems. The problem consists of determining the most efficient schedule for jobs that are processed on several machines. Approach: In this study a Genetic Algorithm (GA) integrated with a parallel version of the Simulated Annealing (SA) algorithm is applied to the job shop scheduling problem. The proposed algorithm is implemented in a distributed environment using the Remote Method Invocation concept. A new genetic operator and a parallel simulated annealing algorithm are developed for solving job shop scheduling. Results: The implementation successfully examines the convergence and effectiveness of the proposed hybrid algorithm. The JSS problems were tested with well-known benchmark problems to measure the quality of the proposed system. Conclusion/Recommendations: The empirical results show that the proposed genetic algorithm with simulated annealing achieves better solutions than the individual genetic or simulated annealing algorithms.
Binocular adaptive optics visual simulator.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2009-09-01
A binocular adaptive optics visual simulator is presented. The instrument allows for measuring and manipulating the ocular aberrations of the two eyes simultaneously, while the subject performs visual testing under binocular vision. An important feature of the apparatus consists in the use of a single correcting device and wavefront sensor. Aberrations are controlled by means of a liquid-crystal-on-silicon spatial light modulator, onto which the two pupils of the subject are projected. Aberrations from the two eyes are measured with a single Hartmann-Shack sensor. As an example of the potential of the apparatus for studying the impact of the eye's aberrations on binocular vision, results of contrast sensitivity after addition of spherical aberration are presented for one subject. Different binocular combinations of spherical aberration were explored. Results suggest complex binocular interactions in the presence of monochromatic aberrations. The technique and the instrument might contribute to a better understanding of binocular vision and to the search for optimized ophthalmic corrections.
Synthesis of optimal digital shapers with arbitrary noise using simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Regadío, Alberto, E-mail: aregadio@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain); Sánchez-Prieto, Sebastián, E-mail: sebastian.sanchez@uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Tabero, Jesús, E-mail: taberogj@inta.es [Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain)
2014-02-21
This paper presents the structure, design and implementation of a new way of determining the optimal shaping in time-domain for spectrometers by means of simulated annealing. The proposed algorithm is able to adjust automatically and in real-time the coefficients for shaping an input signal. A practical prototype was designed, implemented and tested on a PowerPC 405 embedded in a Field Programmable Gate Array (FPGA). Lastly, its performance and capabilities were measured using simulations and a neutron monitor.
Institute of Scientific and Technical Information of China (English)
Wang Hongkai; Guan Yanyong; Xue Peijun
2008-01-01
In rough communication, because each agent has a different language and cannot communicate precisely with the others, a concept translated among multiple agents loses some information, resulting in a rougher concept. With different translation sequences, the information loss varies. To find the translation sequence in which the j-th agent taking part in rough communication obtains the maximum information, a simulated annealing algorithm is used. Analysis and simulation of this algorithm demonstrate its effectiveness.
Directory of Open Access Journals (Sweden)
Destya Arisetyanti
2012-09-01
The Digital Video Broadcasting-Terrestrial (DVB-T) standard is implemented in a Single Frequency Network (SFN) configuration, in which all transmitters in a network operate on the same frequency channel and transmit at the same time. SFN is preferred over its predecessor, the Multi Frequency Network (MFN), because it uses spectrum more efficiently and covers a wider area. Because the SFN configuration is based on Orthogonal Frequency Division Multiplexing (OFDM), the receiver can combine signals from different transmitters in a multipath scenario. In this study, building height and count data are applied in free-space and knife-edge propagation prediction models to estimate received power and signal delay. Carrier (C) and carrier-to-interference (C/I) values are computed to assess signal quality at the receiver. Transmitter location parameters are then optimized with a Simulated Annealing algorithm using the three best cooling schedules. Simulated Annealing is an optimization algorithm based on thermodynamics that simulates the annealing process. It successfully extended the SFN coverage area, as shown by the reduction in receiver locations with signal quality below the threshold.
The Adaptive Multi-scale Simulation Infrastructure
Energy Technology Data Exchange (ETDEWEB)
Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)
2015-09-01
The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows minimally intrusive adaptation of existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular attention has been paid to the development of scale-sensitive load-balancing operations that allow single-scale simulations incorporated into a multi-scale simulation via AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.
Directory of Open Access Journals (Sweden)
Chang Li
2014-01-01
Much of the previous work on D-optimal design for regression models with correlated errors focused on polynomial models with a single predictor variable, in large part because of the intractability of an analytic solution. In this paper, we present a modified, improved simulated annealing algorithm, providing practical approaches to the specification of the annealing cooling parameters, thresholds, and search neighborhoods for the perturbation scheme, which finds approximate D-optimal designs for 2-way and 3-way polynomial regression for a variety of specific correlation structures with a given correlation coefficient. Results in each correlated-errors case are compared with the traditional simulated annealing algorithm, that is, the SA algorithm without our improvement. Our improved simulated annealing results generally had higher D-efficiency than the traditional simulated annealing algorithm, especially when the correlation parameter was well away from 0.
Ohzeki, Masayuki
2017-01-01
Quantum annealing is a generic solver of the optimization problem that uses fictitious quantum fluctuation. Its simulation in classical computing is often performed using the quantum Monte Carlo simulation via the Suzuki–Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians. PMID:28112244
Directory of Open Access Journals (Sweden)
Sheng Lu
2015-01-01
To solve the problem of parameter selection during the design of a magnetically coupled resonant wireless power transmission system (MCR-WPT), this paper proposes an improved genetic simulated annealing algorithm. Firstly, the equivalent circuit of the system is analyzed and a nonlinear programming mathematical model is built. Secondly, in place of the penalty function method in the genetic algorithm, a selection strategy based on the distance between individuals is adopted, which reduces the number of empirical parameters. Meanwhile, the convergence rate and searching ability are improved by calculating the crossover and mutation probabilities according to the variance of the population's fitness. Finally, a simulated annealing operator is added to increase the local search ability of the method. The simulation shows that the improved method can escape local optima and reach the global optimum faster, and the optimized system meets practical requirements.
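The idea of deriving crossover and mutation probabilities from the variance of the population's fitness can be sketched as follows. The specific mapping from variance to probabilities is hypothetical, not the paper's exact formula:

```python
import statistics

def adaptive_rates(fitnesses, pc_range=(0.5, 0.9), pm_range=(0.01, 0.1)):
    """Map population fitness variance to crossover/mutation probabilities.
    Low variance (population converging) -> raise rates to restore diversity;
    high variance -> lower rates to exploit. The mapping is hypothetical."""
    var = statistics.pvariance(fitnesses)
    # Normalize variance to [0, 1) with a soft saturation.
    s = var / (1.0 + var)
    pc = pc_range[1] - s * (pc_range[1] - pc_range[0])
    pm = pm_range[1] - s * (pm_range[1] - pm_range[0])
    return pc, pm

pc_low, pm_low = adaptive_rates([1.0, 1.0, 1.0, 1.0])    # converged population
pc_high, pm_high = adaptive_rates([0.1, 2.0, 5.0, 9.0])  # diverse population
# The converged population receives higher crossover/mutation rates.
```

Tying the rates to a population statistic rather than fixed constants is what removes the excess empirical parameters the abstract mentions.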
Optimal Lead-lag Controller for Distributed Generation Unit in Island Mode Using Simulated Annealing
Directory of Open Access Journals (Sweden)
A. Akbarimajd
2014-07-01
The active and reactive power components of a Distributed Generation (DG) unit are normally controlled by a conventional dq-current control strategy. After islanding, however, the dq-current controller is no longer able to complete the control task and is disabled, and a lead-lag control strategy optimized by simulated annealing is proposed for control of the DG unit in islanding mode. The Integral of Time multiplied by Absolute Error (ITAE) criterion is used as the cost function of simulated annealing in order to achieve a smooth response and robust behavior. The proposed controller improved the robust stability margins of the system. Simulations with different load and input operating conditions verify the advantages of the proposed controller over a previously developed classic controller in terms of robustness and response time.
Simulated annealing: an application in fine particle magnetism
Energy Technology Data Exchange (ETDEWEB)
Legeratos, A.; Chantrell, R.W.; Wohlfarth, E.P.
1985-07-01
Using a model of a system of interacting fine ferromagnetic particles, a computer simulation of the dynamical approach to local or global minima of the system is developed for two different schedules of the application of ac and dc magnetic fields. The process of optimization, i.e., the achievement of a global minimum, depends on the rate of reduction of the ac field and on the symmetry of the ac field cycles. The calculations carried out to illustrate these effects include remanence curves and the zero field remanence for both schedules under different conditions. The growth of the magnetization during these processes was studied, and the interaction energy was calculated to best illustrate the optimization.
Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej
2016-04-01
Due to their exceptional strength combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. The design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was the evaluation of the models' predictive capabilities with regard to their applicability to the simulation of thermal cycles for AHSS. Two models were considered: the former an upgrade of the JMAK equation, the latter an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior in more complex cooling conditions. The second set included experimental simulations of the thermal cycle characteristic of continuous annealing lines, to evaluate the capability of the models to properly describe phase transformations in this process. The third set included data from an industrial continuous annealing line. Validation and verification confirmed the models' good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^{-3}) for a single-level version of the adaptive algorithm to O((TOL^{-1} log(TOL))^2).
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing.
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, "MOPSOSA". The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets.
Global Optimization with Simulated Annealing
Directory of Open Access Journals (Sweden)
Francisco Sánchez Mares
2006-01-01
This paper presents an application of the global optimization method simulated annealing (SA). This technique has been applied in several areas of engineering as a robust and versatile strategy for successfully computing the global minimum of a function or of a system of functions. To test the efficiency of the method, the global minima of an arbitrary function were found, and the numerical behaviour of simulated annealing during convergence to the two solutions of the case study was evaluated.
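The abstract does not give the test function or the cooling schedule, so the sketch below is a generic SA minimizer with geometric cooling and the Metropolis acceptance rule, applied to an arbitrary multimodal function of one variable; all parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=20.0, t_min=1e-3, alpha=0.95,
                        iters_per_temp=100, seed=0):
    """Minimize f over the reals. A worsening move of size delta is
    accepted with probability exp(-delta/T) (Metropolis criterion), and
    the temperature T follows a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = x + rng.uniform(-step, step)
            fc = f(cand)
            delta = fc - fx
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x, fx
        t *= alpha   # geometric cooling
    return best_x, best_f

# An arbitrary multimodal function with its global minimum at x = 0
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x, fmin = simulated_annealing(f, x0=4.3)
```

At high temperature nearly every move is accepted, letting the search hop between local minima; as the temperature falls, the walk becomes a pure descent inside the basin it has settled in.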
Research on a coal-mine gas monitoring system controlled by a simulated annealing algorithm
Zhou, Mengran; Li, Zhenbi
2007-12-01
This paper introduces the principle and schematic diagram of a gas monitoring system based on the infrared method. A simulated annealing algorithm is adopted to find the global optimum, and the Metropolis criterion is used to perform iterative combinatorial optimization with a decreasing control parameter, with the aim of solving a large-scale combinatorial optimization problem. Experimental results obtained with the algorithm training scheme and its workflow indicate that simulated annealing applied to gas identification outperforms the traditional linear local search method. It makes the algorithm iterate rapidly to the optimum value, so the quality of the solution is improved efficiently, the CPU time is shortened, and the gas identification rate is increased. For mines with a high risk of gas outbursts, advance forecasting of regional danger and disasters can be realized, improving the reliability of coal-mine safety.
Institute of Scientific and Technical Information of China (English)
吴剑锋; 朱学愚; 刘建立
1999-01-01
The genetic algorithm (GA) is a global, random search procedure based on the mechanics of natural selection and natural genetics. A new optimization method, the genetic algorithm-based simulated annealing penalty function (GASAPF), is presented to solve a groundwater management model. Compared with traditional gradient-based algorithms, the GA is straightforward and there is no need to calculate derivatives of the objective function. The GA is able to generate both convex and nonconvex points within the feasible region. Handling the constraints by the simulated annealing technique ensures that the GA converges to the global, or at least a near-global, optimal solution. Maximum-pumping example results show that the GASAPF is very efficient and robust for solving the optimization model.
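The abstract gives no details of the groundwater model, so the sketch below only illustrates the core GASAPF idea on a toy constrained problem: a GA whose penalty for constraint violation is controlled by a simulated-annealing temperature that decreases over generations, so early generations may roam infeasible space while late ones cannot. The function name, toy constraint, and all parameters are assumptions.

```python
import random

def gasapf(pop_size=60, gens=120, seed=7):
    """GA with a simulated-annealing penalty. Toy problem (illustrative,
    not the groundwater model): maximize x + y subject to x^2 + y^2 <= 1;
    the constrained optimum is x = y = 1/sqrt(2)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    T = 1.0

    def fitness(ind, T):
        x, y = ind
        violation = max(0.0, x * x + y * y - 1.0)
        return x + y - violation / T       # penalty weight grows as T falls

    for _ in range(gens):
        scored = sorted(pop, key=lambda ind: fitness(ind, T), reverse=True)
        elite = scored[: pop_size // 2]    # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(elite), rng.choice(elite)
            w = rng.random()               # blend crossover + Gaussian mutation
            children.append((w * a[0] + (1 - w) * b[0] + rng.gauss(0, 0.1),
                             w * a[1] + (1 - w) * b[1] + rng.gauss(0, 0.1)))
        pop = elite + children
        T *= 0.95                          # annealing schedule for the penalty
    return max(pop, key=lambda ind: fitness(ind, T))
```

No derivatives of the objective or the constraint are needed, which is the property the abstract highlights over gradient-based algorithms.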
Fast and accurate protein substructure searching with simulated annealing and GPUs
Directory of Open Access Journals (Sweden)
Stivala Alex D
2010-09-01
Abstract Background: Searching a database of protein structures for matches to a query structure, or for occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results: We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions: The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.
Institute of Scientific and Technical Information of China (English)
Jin Shi-Feng; Wang Wei-Min; Zhou Jian-Kun; Guo Hong-Xuan; J.F. Webb; Bian Xiu-Fang
2005-01-01
The nanocrystallization behaviour of Zr70Cu20Ni10 metallic glass during isothermal annealing is studied by employing a Monte Carlo simulation incorporating a modified Ising model and a Q-state Potts model. Based on the simulated microstructure and differential scanning calorimetry curves, we find that the low crystal-amorphous interface energy of Ni plays an important role in the nanocrystallization of primary Zr2Ni. It is found that when T < T_Imax (where T_Imax is the temperature of maximum nucleation rate), an increase of temperature results in a larger growth rate and a much finer microstructure for the primary Zr2Ni, which accords with the microstructure evolution in "flash annealing". Finally, the Zr2Ni/Zr2Cu interface energy σG contributes to the pinning effect of the primary nano-sized Zr2Ni grains within the later-formed normal Zr2Cu grains.
Paul, Gerald
2010-01-01
For almost two decades the question of whether tabu search (TS) or simulated annealing (SA) performs better for the quadratic assignment problem has been unresolved. To answer this question satisfactorily, we compare performance at various values of targeted solution quality, running each heuristic at its optimal number of iterations for each target. We find that for a number of varied problem instances, SA performs better for higher quality targets while TS performs better for lower quality targets.
Paul, Gerald
2011-01-01
The quadratic assignment problem (QAP) is one of the most difficult combinatorial optimization problems. One of the most powerful and commonly used heuristics for obtaining approximations to the optimal solution of the QAP is simulated annealing (SA). We present an efficient implementation of the SA heuristic which performs more than 100 times faster than existing implementations for large problem sizes and large numbers of SA iterations.
A GPU implementation of the Simulated Annealing Heuristic for the Quadratic Assignment Problem
Paul, Gerald
2012-01-01
The quadratic assignment problem (QAP) is one of the most difficult combinatorial optimization problems. An effective heuristic for obtaining approximate solutions to the QAP is simulated annealing (SA). Here we describe an SA implementation for the QAP which runs on a graphics processing unit (GPU). GPUs are composed of low cost commodity graphics chips which in combination provide a powerful platform for general purpose parallel computing. For SA runs with large numbers of iterations, we fi...
Directory of Open Access Journals (Sweden)
Kohei Arai
2012-07-01
A method for geophysical parameter estimation with microwave radiometer data based on simulated annealing (SA) is proposed. The geophysical parameters estimated from microwave radiometer data are closely related to each other, so simultaneous estimation imposes constraints in accordance with these relations. On the other hand, SA requires huge computational resources to converge. In order to accelerate the convergence process, an oscillating decreasing function is proposed as the cooling schedule. Experimental results show remarkable improvements in the geophysical parameter estimations.
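The abstract names an "oscillated decreasing function" as the cool-down schedule but does not give its form; the function below is one plausible guess (an exponentially decaying envelope modulated by a cosine), shown only to illustrate how a schedule can decrease overall while periodically reheating. The functional form and all constants are assumptions.

```python
import math

def oscillating_cooldown(k, t0=10.0, decay=0.01, amp=0.4, period=50):
    """One plausible 'oscillated decreasing' cooling schedule for SA:
    the temperature at iteration k trends exponentially to zero but
    periodically rises again (partial reheating), which can help the
    search escape local minima it has been cooled into."""
    envelope = t0 * math.exp(-decay * k)                         # decay
    modulation = 1.0 + amp * math.cos(2.0 * math.pi * k / period)  # reheating
    return envelope * modulation
```

Compared with a plain exponential schedule, the reheating phases re-enable uphill moves at regular intervals while the envelope still guarantees eventual convergence to low temperature.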
Akbar, Akhmad Fanani; Nugraha, Andri Dian; Sule, Rachmat; Juanda, Aditya Abdurrahman
2013-09-01
Hypocenter determination of micro-earthquakes at the Mount "X-1" geothermal field has been conducted using simulated annealing and guided error search with a 1D seismic velocity model. In order to speed up the hypocenter determination, a three-circle intersection method was used to guide the simulated annealing and guided error search. We used P- and S-arrival times from microseismic data. In the simulated annealing and guided error search, the minimum travel time from a source to a receiver was calculated by ray tracing with the shooting method. The resulting hypocenters occurred at depths of 3-4 km below mean sea level. Their distribution correlates with a previous study, which concluded that the most active microseismic area is the site of many fractures and of vertical circulation. The resulting hypocenter locations were then used as input to determine a 1-D seismic velocity model by the joint hypocenter determination method. The VELEST results indicate a low Vp/Vs ratio at depths of 3-4 km; our interpretation is that this anomaly may be related to a rock layer saturated by vapor (gas or steam). Another feature is a high Vp/Vs ratio at depths of 1-3 km, which may be related to a rock layer saturated by fluid or to partial melting. We also analyzed the focal mechanisms of the microseismic events using the ISOLA method to determine their source characteristics.
DEFF Research Database (Denmark)
Sousa, Tiago M; Morais, Hugo; Castro, R.
2014-01-01
An intensive use of dispersed energy resources is expected for future power systems, including distributed generation, especially based on renewable sources, and electric vehicles. The system operation methods and tools must be adapted to the increased complexity, especially the optimal resource scheduling problem. Therefore, the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach...
Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.
2016-10-01
Using a 4D grid of ~2 million model parameters (Δz = 0.005) adapted from COSMOS photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384,662 objects, we show our approach samples ~40 times more efficiently than a 'brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
Erler, Axel; Wegmann, Susanne; Elie-Caille, Celine; Bradshaw, Charles Richard; Maresca, Marcello; Seidel, Ralf; Habermann, Bianca; Muller, Daniel J; Stewart, A Francis
2009-08-21
Single-strand annealing proteins, such as Redbeta from lambda phage or eukaryotic Rad52, play roles in homologous recombination. Here, we use atomic force microscopy to examine Redbeta quaternary structure and Redbeta-DNA complexes. In the absence of DNA, Redbeta forms a shallow right-handed helix. The presence of single-stranded DNA (ssDNA) disrupts this structure. Upon addition of a second complementary ssDNA, annealing generates a left-handed helix that incorporates 14 Redbeta monomers per helical turn, with each Redbeta monomer annealing approximately 11 bp of DNA. The smallest stable annealing intermediate requires 20 bp DNA and two Redbeta monomers. Hence, we propose that Redbeta promotes base pairing by first increasing the number of transient interactions between ssDNAs. Then, annealing is promoted by the binding of a second Redbeta monomer, which nucleates the formation of a stable annealing intermediate. Using threading, we identify sequence similarities between the RecT/Redbeta and the Rad52 families, which strengthens previous suggestions, based on similarities of their quaternary structures, that they share a common mode of action. Hence, our findings have implications for a common mechanism of DNA annealing mediated by single-strand annealing proteins including Rad52.
Thin film design using simulated annealing and study of the filter robustness
Boudet, Thierry; Chaton, Patrick
1996-08-01
Modern optical components require sophisticated coatings with tough specifications, and the design of optical multilayers has become a key activity of most laboratories and factories. A synthesis technique based on the simulated annealing algorithm is presented here. In this stochastic minimization, no starting solution is required; only the materials and technological constraints need to be specified. Moreover, the algorithm will always reach a final result. As simulated annealing is a stochastic algorithm, a great number of state transitions is needed in order to reach a global minimum of the merit function used to evaluate the difference between the optical target and the calculated filter. Nevertheless, the computing time remains reasonable on a workstation. A few examples show the performance of our program. It should also be pointed out that no refinement is needed at the end of the annealing because the solution is already highly optimized. Nowadays the design of robust filters with low sensitivity to technological variations remains a key factor for manufacturers. This is why we have established criteria that quantify the robustness of the stacks. These criteria also enable comparison of multilayers synthesized by different methods for the same target.
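A minimal sketch of this kind of synthesis: the reflectance of a layer stack from the standard characteristic-matrix method at normal incidence, and SA over the layer thicknesses against a merit function. The materials (a MgF2-like and a TiO2-like index on glass), the 500-600 nm target band, and all SA parameters are illustrative assumptions, not the paper's.

```python
import math
import random

def reflectance(thicknesses, indices, lam, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a thin-film stack (layers listed
    from the incidence side) via the characteristic-matrix method."""
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0       # 2x2 identity
    for d, n in zip(thicknesses, indices):
        delta = 2.0 * math.pi * n * d / lam        # phase thickness
        c, s = math.cos(delta), math.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    num = n_in * (m00 + m01 * n_sub) - (m10 + m11 * n_sub)
    den = n_in * (m00 + m01 * n_sub) + (m10 + m11 * n_sub)
    return abs(num / den) ** 2

def design_ar_coating(seed=11):
    """SA over two layer thicknesses, minimizing the mean reflectance
    (the merit function) over 500-600 nm; no refinement step afterwards."""
    rng = random.Random(seed)
    indices = [1.38, 2.35]                         # low over high, on glass
    def merit(ds):
        return sum(reflectance(ds, indices, lam)
                   for lam in range(500, 601, 10)) / 11.0
    ds = [100.0, 100.0]
    best_ds, best_m = list(ds), merit(ds)
    cur_m, t = best_m, 0.05
    for _ in range(4000):
        cand = [max(5.0, d + rng.gauss(0, 5.0)) for d in ds]
        cm = merit(cand)
        if cm < cur_m or rng.random() < math.exp((cur_m - cm) / t):
            ds, cur_m = cand, cm
            if cm < best_m:
                best_ds, best_m = list(cand), cm
        t *= 0.999
    return best_ds, best_m
```

Robustness in the abstract's sense could then be probed by re-evaluating the merit after perturbing the optimized thicknesses by a few nanometres.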
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board unmanned aerial vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the deterministic simulated annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and fuzzy clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Adaptive resolution simulation of oligonucleotides
Netz, Paulo A.; Potestio, Raffaello; Kremer, Kurt
2016-12-01
Nucleic acids are characterized by a complex hierarchical structure and a variety of interaction mechanisms with other molecules. These features suggest the need for multiscale simulation methods in order to grasp the relevant physical properties of deoxyribonucleic acid (DNA) and RNA using in silico experiments. Here we report an implementation of dual-resolution modeling of a DNA oligonucleotide in physiological conditions; in the presented setup, only the nucleotide molecule and the solvent and ions in its proximity are described at the atomistic level; in contrast, the water molecules and ions far from the DNA are represented as computationally less expensive coarse-grained particles. Through the analysis of several structural and dynamical parameters, we show that this setup reliably reproduces the physical properties of the DNA molecule as observed in reference atomistic simulations. These results represent a first step towards realistic multiscale modeling of nucleic acids and provide a quantitatively solid ground for their simulation using dual-resolution methods.
Energy Technology Data Exchange (ETDEWEB)
Nandipati, Giridhar, E-mail: giridhar.nandipati@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA (United States); Setyawan, Wahyu; Heinisch, Howard L. [Pacific Northwest National Laboratory, Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Laboratory, Richland, WA (United States); Department of Physics, University of Washington, Seattle, WA 98195 (United States); Kurtz, Richard J. [Pacific Northwest National Laboratory, Richland, WA (United States); Wirth, Brian D. [University of Tennessee, Knoxville, TN (United States)
2015-07-15
The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.
Kumar, Pushpendra; Huber, Patrick
2016-04-01
The discovery of porous silicon formation in silicon substrates in 1956, while electro-polishing crystalline Si in hydrofluoric acid (HF), triggered large-scale investigations of porous silicon formation and of the changes in its physical and chemical properties with thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200°C to 800°C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption/desorption isotherms show hysteresis typical of capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind hysteresis formation. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different behavior of adsorption and desorption of nitrogen in PS with pore diameter is discussed in terms of the formation of concave menisci inside the pore space, which was shown to be related to the induced pressure as the pore diameter varies from 7.2 nm to 3.4 nm.
Kerr, I. D.; Sankararamakrishnan, R; Smart, O.S.; Sansom, M S
1994-01-01
A parallel bundle of transmembrane (TM) alpha-helices surrounding a central pore is present in several classes of ion channel, including the nicotinic acetylcholine receptor (nAChR). We have modeled bundles of hydrophobic and of amphipathic helices using simulated annealing via restrained molecular dynamics. Bundles of Ala20 helices, with N = 4, 5, or 6 helices/bundle were generated. For all three N values the helices formed left-handed coiled coils, with pitches ranging from 160 A (N = 4) to...
Hansen, S H
2004-01-01
We present a user-friendly tool for the analysis of data from Sunyaev-Zel'dovich (SZ) effect observations. The tool is based on the stochastic method of simulated annealing and allows the extraction of the central values and error bars of the three SZ parameters: the Comptonization parameter y, the peculiar velocity v_p, and the electron temperature T_e. The f77 code SASZ allows any number of observing frequencies and spectral band shapes. As an example, we consider the SZ parameters for the Coma cluster.
Design of phase plates for shaping partially coherent beams by simulated annealing
Institute of Scientific and Technical Information of China (English)
Li Jian-Long; Lü Bai-Da
2008-01-01
Taking the Gaussian Schell-model beam as a typical example of partially coherent beams, this paper applies the simulated annealing (SA) algorithm to the design of phase plates for shaping partially coherent beams. A flow diagram is presented to illustrate the procedure of phase optimization by the SA algorithm. Numerical examples demonstrate the advantages of the SA algorithm in shaping partially coherent beams. A uniform flat-topped beam profile with maximum reconstruction error RE < 1.74% is achieved. A further extension of the approach is discussed.
Energy Technology Data Exchange (ETDEWEB)
Fleischer, M.; Jacobson, S.
1994-12-31
This paper presents a new empirical approach designed to illustrate the theory developed in Fleischer and Jacobson regarding entropy measures and the finite-time performance of the simulated annealing (SA) algorithm. The theory is tested using several experimental methodologies based on a new structure, generic configuration spaces, and polynomial transformations between NP-hard problems. Both approaches provide several ways to alter the configuration space and its associated entropy measure while preserving the value of the globally optimal solution. This makes it possible to illuminate the extent to which entropy measures impact the finite-time performance of the SA algorithm.
Simulated annealing applied to two-dimensional low-beta reduced magnetohydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Chikasue, Y., E-mail: chikasue@ppl.k.u-tokyo.ac.jp [Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561 (Japan); Furukawa, M., E-mail: furukawa@damp.tottori-u.ac.jp [Graduate School of Engineering, Tottori University, Minami 4-101, Koyama-cho, Tottori-shi, Tottori 680-8552 (Japan)
2015-02-15
The simulated annealing (SA) method is applied to two-dimensional (2D) low-beta reduced magnetohydrodynamics (R-MHD). We have successfully obtained stationary states of the system numerically by the SA method with Casimir invariants preserved. Since the 2D low-beta R-MHD has two fields, the relaxation process becomes complex compared to a single field system such as 2D Euler flow. The obtained stationary state can have fine structure. We have found that the fine structure appears because the relaxation processes are different between kinetic energy and magnetic energy.
Institute of Scientific and Technical Information of China (English)
Zhao Zhi-Jin; Zheng Shi-Lian; Xu Chun-Yun; Kong Xian-Zheng
2007-01-01
Hidden Markov models (HMMs) have been used to model burst-error sources of wireless channels. This paper proposes a hybrid method using a genetic algorithm (GA) and simulated annealing (SA) to train HMMs for discrete channel modelling. The proposed method is compared with a pure GA, and experimental results show that the HMMs trained by the hybrid method can better describe the error sequences, due to SA's ability to facilitate hill-climbing at the later stage of the search. The burst-error statistics of the HMMs trained by the proposed method and the corresponding error sequences are also presented to validate the method.
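The HMM training details are not in the abstract, so the sketch below shows only the generic hybrid pattern it describes: a GA for global exploration followed by SA for hill-climbing at the later stage of the search, here applied to a toy one-dimensional multimodal function. The function name and every parameter value are assumptions.

```python
import math
import random

def ga_sa_hybrid(f, bounds, pop_size=40, gens=60, sa_iters=2000, seed=5):
    """Hybrid GA+SA minimization of f over a box: the GA phase explores
    globally, then an SA phase refines the best individual found."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):                      # --- GA phase ---
        pop.sort(key=f)
        elite = pop[: pop_size // 4]           # elitist selection
        pop = elite + [
            # blend crossover of two elites plus Gaussian mutation
            0.5 * (rng.choice(elite) + rng.choice(elite)) + rng.gauss(0, 0.3)
            for _ in range(pop_size - len(elite))
        ]
    x = min(pop, key=f)                        # --- SA refinement phase ---
    fx, t = f(x), 0.1
    for _ in range(sa_iters):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.05)))
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= 0.997                             # cool toward pure hill-climbing
    return x, fx
```

The division of labour mirrors the abstract's finding: the GA supplies a good basin, and the low-temperature SA walk supplies the fine local improvement a plain GA tends to miss.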
MASTR: multiple alignment and structure prediction of non-coding RNAs using simulated annealing
DEFF Research Database (Denmark)
Lindgreen, Stinus; Gardner, Paul P; Krogh, Anders
2007-01-01
It is known that RNA structure is often evolutionarily more conserved than sequence. However, few existing methods are capable of simultaneously considering multiple sequence alignment and structure prediction. RESULT: We present a novel solution to the problem of simultaneous structure prediction and multiple alignment of RNA sequences. Using Markov chain Monte Carlo in a simulated annealing framework, the algorithm MASTR (Multiple Alignment of STructural RNAs) iteratively improves both sequence alignment and structure prediction for a set of RNA sequences. This is done by minimizing a combined cost...
Comparing of the Deterministic Simulated Annealing Methods for Quadratic Assignment Problem
Directory of Open Access Journals (Sweden)
Mehmet Güray ÜNSAL
2013-08-01
In this study, the threshold accepting and record-to-record travel methods, which belong to the simulated annealing family of meta-heuristics, are applied to the quadratic assignment problem and analyzed statistically for significant differences in their objective function values and CPU times. No significant differences are found between the two algorithms in terms of CPU time or objective function values. Consequently, on the quadratic assignment problem, the two algorithms compared in this study have the same performance with respect to CPU time and objective function values.
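The two methods differ from classical SA only in their acceptance rule, which the sketch below makes explicit on a toy QAP with a swap neighbourhood: threshold accepting accepts any move that worsens the cost by less than a shrinking threshold, while record-to-record travel accepts any move whose cost stays below the best cost seen so far plus a fixed deviation. The instance data and all parameter values are illustrative assumptions.

```python
import random

def qap_cost(perm, flow, dist):
    """QAP objective: sum over facility pairs of flow * distance."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def threshold_accepting(flow, dist, iters=20000, t0=50.0, seed=2):
    """Deterministic SA variant: accept a swap iff it worsens the cost
    by less than the current threshold, which shrinks linearly to zero."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best = cost
    for k in range(iters):
        thr = t0 * (1.0 - k / iters)
        i, j = rng.randrange(n), rng.randrange(n)
        perm[i], perm[j] = perm[j], perm[i]
        new_cost = qap_cost(perm, flow, dist)
        if new_cost - cost < thr:
            cost = new_cost
            best = min(best, cost)
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return best

def record_to_record(flow, dist, iters=20000, deviation=20.0, seed=2):
    """Accept a swap iff its cost stays below RECORD + deviation, where
    RECORD is the best cost found so far."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    record = qap_cost(perm, flow, dist)
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)
        perm[i], perm[j] = perm[j], perm[i]
        new_cost = qap_cost(perm, flow, dist)
        if new_cost < record + deviation:
            record = min(record, new_cost)
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return record
```

Both rules are deterministic given the proposal sequence, which is why they are often compared head to head, as in this study.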
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is the optimal design of the additional drilling pattern, i.e. defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization is defined as the objective function as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the uncertainty assessment of boundaries, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of the results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations of grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
Temporary Workforce Planning with Firm Contracts: A Model and a Simulated Annealing Heuristic
Directory of Open Access Journals (Sweden)
Muhammad Al-Salamah
2011-01-01
The aim of this paper is to introduce a model for temporary staffing when temporary employment is managed by firm contracts, and to propose a simulated annealing-based method to solve the model. Temporary employment is a policy frequently used to adjust working-hour capacity to fluctuating demand. Temporary workforce planning models have been unnecessarily simplified to account only for periodic hiring and laying off; a company can review its workforce requirement every period and make hire-fire decisions accordingly, usually with a layoff cost. We present a more realistic temporary workforce planning model that assumes a firm contract between the worker and the company, which can extend over several periods. The model assumes the traditional constraints, such as inventory balance constraints, worker availability, and labor hour mix. The costs are the inventory holding cost, the training cost of the temporary workers, and the backorder cost. The mixed integer model developed for this case has been found to be difficult to solve even for small problem sizes; therefore, a simulated annealing algorithm is proposed to solve it. The performance of the SA algorithm is compared with the CPLEX solution.
Design and optimization of solid rocket motor Finocyl grain using simulated annealing
Institute of Scientific and Technical Information of China (English)
Ali Kamran; LIANG Guo-zhu
2011-01-01
This research outlines the application of a computer-aided design (CAD)-centric technique to the design and optimization of a solid rocket motor Finocyl (fin in cylinder) grain using simulated annealing. The proper method for constructing the grain configuration model, the ballistic performance, and optimizer integration for analysis is presented. Finocyl is a complex grain configuration requiring thirteen variables to define the geometry. The large number of variables complicates not only the geometrical construction but also the optimization process. The CAD representation encapsulates all of the geometric entities pertinent to the grain design in a parametric way, allowing manipulation of the grain entity (web), performing regression, and automating geometrical data calculations. Robustness in avoiding local minima and an efficient capacity to explore the design space make simulated annealing an attractive choice as the optimizer. This is demonstrated with a constrained optimization of the Finocyl grain geometry, for a homogeneous, isotropic propellant, uniform regression, and a quasi-steady, bulk-mode internal ballistics model, that maximizes average thrust for required deviations from neutrality.
Redesigning rain gauges network in Johor using geostatistics and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)
2015-02-03
Recently, many rainfall network design techniques have been developed, discussed, and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature, and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used a combination of a geostatistical method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of a new optimum rain gauge network.
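The relocation search described in this abstract can be sketched generically. The toy Python below is a sketch only: the grid and gauge counts are invented, and mean squared distance to the nearest gauge stands in for the kriging estimation variance that the paper actually minimizes.

```python
import math
import random

def mean_nearest_dist2(stations, grid):
    """Proxy objective: mean squared distance from each grid point to its
    nearest gauge (the paper minimizes a kriging estimation variance)."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in stations)
               for gx, gy in grid) / len(grid)

def redesign(candidates, grid, k, t0=1.0, cooling=0.999, steps=5000, seed=0):
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    cost = mean_nearest_dist2(current, grid)
    best, best_cost = current[:], cost
    for step in range(steps):
        t = t0 * cooling ** step
        cand = current[:]
        cand[rng.randrange(k)] = rng.choice(candidates)  # relocate one gauge
        c = mean_nearest_dist2(cand, grid)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost
```

A worse configuration is still accepted with probability exp(-Δ/T), which is what lets the gauge layout escape locally optimal clusters early in the cooling schedule.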
Directory of Open Access Journals (Sweden)
Kai Moriguchi
2015-01-01
We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm's versatility. The thinning rate, which was constrained to 0-50% every 5 years at stand ages of 10-45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method's reliability, and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search; however, it is necessary to execute multiple runs to obtain reliable solutions.
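A minimal sketch of the optimization described here, under loudly toy assumptions: the 35% five-year growth factor, timber price, and discount rate are invented stand-ins for the paper's four benchmark yield models. Only the decision structure matches the abstract: thinning rates in 5% steps between 0 and 50% at ages 10-45, maximizing net present value over a 50-year rotation.

```python
import math
import random

AGES = list(range(10, 50, 5))                 # decision ages 10, 15, ..., 45
RATES = [r / 100 for r in range(0, 55, 5)]    # allowed thinning rates 0-50%

def npv(plan, price=50.0, discount=0.03):
    """Toy stand model: volume grows each 5-year period, thinning sells a
    fraction of standing volume, the remainder is clear-cut at age 50."""
    volume, value = 10.0, 0.0
    for age, rate in zip(AGES, plan):
        volume *= 1.35                        # invented 5-year growth factor
        value += price * volume * rate / (1 + discount) ** age
        volume *= 1 - rate
    volume *= 1.35
    return value + price * volume / (1 + discount) ** 50

def anneal(t0=100.0, cooling=0.9995, steps=20000, seed=1):
    rng = random.Random(seed)
    plan = [rng.choice(RATES) for _ in AGES]
    cur = npv(plan)
    best, best_plan = cur, plan[:]
    for step in range(steps):
        t = t0 * cooling ** step
        cand = plan[:]
        cand[rng.randrange(len(AGES))] = rng.choice(RATES)  # perturb one age
        c = npv(cand)
        if c > cur or rng.random() < math.exp((c - cur) / t):
            plan, cur = cand, c
            if c > best:
                best, best_plan = c, plan[:]
    return best_plan, best
```

As the abstract cautions, a single run is not reliable; in practice one would repeat `anneal` with several seeds and keep the best plan.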
Research on Optimal Control for the Vehicle Suspension Based on the Simulated Annealing Algorithm
Directory of Open Access Journals (Sweden)
Jie Meng
2014-01-01
A method is designed to optimize the weight matrices of the LQR controller by using the simulated annealing algorithm. The method exploits the random-search character of the algorithm to optimize the weight matrices, with suspension performance indexes as the objective function. It improves the design efficiency and control performance of LQR control, and resolves the difficulty of defining the weight matrices for the LQR controller. A simulation of vehicle active chassis control is provided. The results show that the active suspension with the LQR controller optimized by simulated annealing performs better than both the chassis controlled by a conventionally tuned LQR and the passive suspension, and the problem of defining the weight matrices is largely resolved.
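The weight-matrix search can be illustrated under simplifying assumptions that are not the authors' vehicle model: a double integrator stands in for the suspension dynamics, a discrete Riccati fixed-point iteration computes the LQR gain for each candidate weight pair, and the ride cost is an invented displacement-plus-effort index.

```python
import math
import random

def mat(*rows): return [list(r) for r in rows]
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]
def madd(X, Y): return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]
def T(X): return [list(c) for c in zip(*X)]

DT = 0.05
A = mat((1.0, DT), (0.0, 1.0))     # double-integrator stand-in for the plant
B = mat((0.0,), (DT,))

def lqr_gain(q1, q2, r=1.0, iters=200):
    """Discrete Riccati fixed-point iteration; returns feedback gains [k1, k2]."""
    Q = mat((q1, 0.0), (0.0, q2))
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = mul(T(B), P)
        s = r + mul(BtP, B)[0][0]
        K = [[v / s for v in mul(BtP, A)[0]]]
        AcK = madd(A, [[-B[i][0] * K[0][j] for j in range(2)] for i in range(2)])
        P = madd(Q, madd(mul(T(AcK), mul(P, AcK)),
                         [[r * K[0][i] * K[0][j] for j in range(2)] for i in range(2)]))
    return K[0]

def ride_cost(gains, x0=(1.0, 0.0), horizon=200):
    """Invented performance index: penalize displacement and control effort."""
    x1, x2 = x0
    cost = 0.0
    for _ in range(horizon):
        u = -(gains[0] * x1 + gains[1] * x2)
        cost += x1 * x1 + 0.05 * u * u
        x1, x2 = x1 + DT * x2, x2 + DT * u
    return cost

def anneal_weights(steps=300, seed=3):
    """Anneal the diagonal LQR weights (q1, q2) against the ride cost."""
    rng = random.Random(seed)
    q = [1.0, 1.0]
    cur = ride_cost(lqr_gain(*q))
    best, best_q = cur, q[:]
    for step in range(steps):
        t = 10.0 * 0.99 ** step
        cand = [max(1e-3, v * math.exp(rng.uniform(-0.5, 0.5))) for v in q]
        c = ride_cost(lqr_gain(*cand))
        if c < cur or rng.random() < math.exp((cur - c) / t):
            q, cur = cand, c
            if c < best:
                best, best_q = c, cand[:]
    return best_q, best
```

The point mirrors the abstract: the annealer only ever proposes weights, and the LQR machinery maps each proposal to a controller whose simulated performance is the objective.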
Kinetic Monte Carlo simulations of boron activation in implanted Si under laser thermal annealing
Fisicaro, Giuseppe; Pelaz, Lourdes; Aboy, Maria; Lopez, Pedro; Italia, Markus; Huet, Karim; Cristiano, Filadelfo; Essa, Zahi; Yang, Qui; Bedel-Pereira, Elena; Quillec, Maurice; La Magna, Antonino
2014-02-01
We investigate the correlation between dopant activation and damage evolution in boron-implanted silicon under excimer laser irradiation. The dopant activation efficiency in the solid phase was measured under a wide range of irradiation conditions and simulated using coupled phase-field and kinetic Monte Carlo models. With the inclusion of dopant atoms, the presented code extends the capabilities of a previous version, allowing its definitive validation by means of detailed comparisons with experimental data. The stochastic method predicts the post-implant kinetics of the defect-dopant system in the far-from-equilibrium conditions caused by laser irradiation. The simulations explain the dopant activation dynamics and demonstrate that the competitive dopant-defect kinetics during the first laser annealing treatment dominates the activation phenomenon, stabilizing the system against additional laser irradiation steps.
A Simulated Annealing Based Location Area Optimization in Next Generation Mobile Networks
Directory of Open Access Journals (Sweden)
Vilmos Simon
2007-01-01
Mobile networks have faced a rapid increase in the number of mobile users, and the solution for supporting the growing population is to reduce cell sizes and increase bandwidth reuse. This causes the number of location management operations and call deliveries to increase significantly, resulting in high signaling overhead. We focus on minimizing this overhead through efficient Location Area Planning (LAP). In this paper we seek to determine the location areas that minimize the registration cost, constrained by the paging cost. For that we propose a simulated annealing algorithm, which is applied to a basic Location Area partition of cells formed by a greedy algorithm. We used our realistic mobile environment simulator to generate input (cell-changing and incoming-call statistics) for the algorithm, and by comparing the values of the registration cost function we found that a significant reduction was achieved in the amount of signaling traffic.
Adaptive Optics Simulations for Siding Spring
Goodwin, Michael; Lambert, Andrew
2012-01-01
Using an observationally derived model optical turbulence profile (model-OTP) we have investigated the performance of Adaptive Optics (AO) at Siding Spring Observatory (SSO), Australia. The simulations cover the performance for AO techniques of single conjugate adaptive optics (SCAO), multi-conjugate adaptive optics (MCAO) and ground-layer adaptive optics (GLAO). The simulation results presented in this paper predict the performance of these AO techniques as applied to the Australian National University (ANU) 2.3 m and Anglo-Australian Telescope (AAT) 3.9 m telescopes for astronomical wavelength bands J, H and K. The results indicate that AO performance is best for the longer wavelengths (K-band) and in the best seeing conditions (sub 1-arcsecond). The most promising results are found for GLAO simulations (field of view of 180 arcsecs), with the field RMS for encircled energy 50% diameter (EE50d) being uniform and minimally affected by the free-atmosphere turbulence. The GLAO performance is reasonably good over...
Energy Technology Data Exchange (ETDEWEB)
Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu
2001-08-01
This work describes an alternative simulated annealing algorithm applied to the design of the main magnet for a magnetic resonance imaging machine. The algorithm uses a probabilistic radial basis neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required by simulated annealing to achieve the global maximum, when compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
Retrieval of Surface and Subsurface Moisture of Bare Soil Using Simulated Annealing
Tabatabaeenejad, A.; Moghaddam, M.
2009-12-01
Soil moisture is of fundamental importance to many hydrological and biological processes. Soil moisture information is vital to understanding the cycling of water, energy, and carbon in the Earth system. Knowledge of soil moisture is critical to agencies concerned with weather and climate, runoff potential and flood control, soil erosion, reservoir management, water quality, agricultural productivity, drought monitoring, and human health. The need to monitor soil moisture on a global scale has motivated missions such as Soil Moisture Active and Passive (SMAP) [1]. Rough surface scattering models and remote sensing retrieval algorithms are essential in the study of soil moisture, because soil can be represented as a rough surface structure. Effects of soil moisture on the backscattered field have been studied since the 1960s, but soil moisture estimation remains a challenging problem, and there is still a need for more accurate and more efficient inversion algorithms. It has been shown that the simulated annealing method is a powerful tool for inversion of the model parameters of rough surface structures [2]. The sensitivity of this method to measurement noise has also been investigated, assuming a two-layer structure characterized by the layers' dielectric constants, layer thickness, and statistical properties of the rough interfaces [2]. However, since the moisture profile varies with depth, it is sometimes necessary to model the rough surface as a layered structure with a rough interface on top and a stratified structure below, where each layer is assumed to have a constant volumetric moisture content. In this work, we discretize the soil structure into several layers of constant moisture content to examine the effect of subsurface profile on the backscattering coefficient. We will show that while the moisture profile could vary in deeper layers, these layers do not affect the scattered electromagnetic field significantly. Therefore, we can use just a few layers
Minimizing distortion and internal forces in truss structures by simulated annealing
Kincaid, Rex K.
1989-01-01
Inaccuracies in the lengths of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d^2_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature; hence minimizing d^2_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d^2_rms. The plausibility of this technique rests on its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and
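The two-interchange annealing over member assignments can be sketched as follows; the influence matrix and the member error values are invented toy data, not a truss model, and the rms of the resulting distortion vector plays the role of d_rms.

```python
import math
import random

def rms_distortion(perm, errors, influence):
    """Surface distortion d_j = sum_i influence[j][i] * e_{perm(i)}; return rms."""
    total = 0.0
    for row in influence:
        d = sum(w * errors[p] for w, p in zip(row, perm))
        total += d * d
    return math.sqrt(total / len(influence))

def anneal_assignment(errors, influence, t0=1.0, cooling=0.998,
                      steps=4000, seed=7):
    rng = random.Random(seed)
    n = len(errors)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = rms_distortion(perm, errors, influence)
    best, best_cost = perm[:], cost
    for step in range(steps):
        t = t0 * cooling ** step
        i, j = rng.sample(range(n), 2)           # two-interchange move
        perm[i], perm[j] = perm[j], perm[i]
        c = rms_distortion(perm, errors, influence)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            cost = c
            if c < best_cost:
                best, best_cost = perm[:], c
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo rejected swap
    return best, best_cost
```

Because the search space is the set of permutations of members, this is exactly the quadratic-assignment structure the abstract identifies, with the Metropolis rule supplying the escape from local minima that the plain two-interchange heuristic lacks.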
Statistical mechanics of Hamiltonian adaptive resolution simulations.
Español, P; Delgado-Buscalioni, R; Everaers, R; Potestio, R; Donadio, D; Kremer, K
2015-02-14
The Adaptive Resolution Scheme (AdResS) is a hybrid scheme that allows one to treat a molecular system with different levels of resolution depending on the location of the molecules. The construction of a Hamiltonian based on this idea (H-AdResS) allows one to formulate the usual tools of ensembles and statistical mechanics. We present a number of exact and approximate results that provide a statistical mechanics foundation for this simulation method. We also present simulation results that illustrate the theory.
Application of simulated annealing to solve multi-objectives for aggregate production planning
Atiya, Bayda; Bakheet, Abdul Jabbar Khudhur; Abbas, Iraq Tereq; Bakar, Mohd. Rizam Abu; Soon, Lee Lai; Monsi, Mansor Bin
2016-06-01
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach for multi-objective linear programming to solve APP. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and requires minimal time.
Simulated Annealing for Ground State Energy of Ionized Donor Bound Excitons in Semiconductors
Institute of Scientific and Technical Information of China (English)
YAN Hai-Qing; TANG Chen; LIU Ming; ZHANG Hao; ZHANG Gui-Min
2004-01-01
We present a global optimization method, simulated annealing, for the ground state energies of excitons. The proposed method does not require the partial derivatives with respect to each variational parameter or the solution of an eigenequation, so it is simpler to program than the variational method and overcomes its major difficulties. The ground state energies of ionized-donor-bound excitons (D+, X) have been calculated variationally for all values of the effective electron-to-hole mass ratio σ. They are compared with those obtained by the variational method. The results demonstrate that the proposed method is simple, accurate, and has advantages over the traditional methods in such calculations.
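The idea of annealing a trial energy instead of differentiating it can be shown on a toy one-parameter functional: the hydrogen-like E(α) = α²/2 − α with its minimum E = −0.5 at α = 1, standing in for the (D+, X) energy surface, which is not reproduced here.

```python
import math
import random

def trial_energy(alpha):
    """Toy one-parameter variational energy (hydrogen-like, atomic units):
    E(alpha) = alpha^2/2 - alpha, minimum E = -0.5 at alpha = 1."""
    return 0.5 * alpha * alpha - alpha

def anneal_min(f, x0=3.0, t0=1.0, cooling=0.999, steps=10000, seed=2):
    """Derivative-free simulated annealing over a continuous parameter."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best = x, fx
    for step in range(steps):
        t = t0 * cooling ** step
        y = x + rng.gauss(0.0, max(t, 0.01))   # proposal width shrinks with T
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fy < best:
                best_x, best = y, fy
    return best_x, best
```

Only function evaluations are needed, which is the abstract's point: no gradients with respect to the variational parameter and no eigenequation to solve.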
Huq, Ashfia; Stephens, P W
2003-02-01
Recent advances in crystallographic computing and availability of high-resolution diffraction data have made it relatively easy to solve crystal structures from powders that would have traditionally required single crystal samples. The success of direct space methods depends heavily on starting with an accurate molecular model. In this paper we address the applicability of using these methods in finding subtleties such as disorder in the molecular conformation that might not be known a priori. We use ranitidine HCl as our test sample as it is known to have a conformational disorder from single crystal structural work. We redetermine the structure from powder data using simulated annealing and show that the conformational disorder is clearly revealed by this method.
Directory of Open Access Journals (Sweden)
Vasios C.E.
2003-01-01
In the present work, a new method for the classification of Event Related Potentials (ERPs) is proposed. The proposed method consists of two modules: the feature extraction module and the classification module. The feature extraction module implements a Multivariate Autoregressive model in conjunction with the Simulated Annealing technique for the selection of optimum features from ERPs. The classification module is implemented with a single three-layer neural network, trained with the back-propagation algorithm, and classifies the data into two classes: patients and control subjects. The method, in the form of a Decision Support System (DSS), has been thoroughly tested on a number of patient data sets (OCD, FES, depressives, and drug users), resulting in successful classification of up to 100%.
Fabrication of simulated plate fuel elements: Defining role of stress relief annealing
Kohli, D.; Rakesh, R.; Sinha, V. P.; Prasad, G. J.; Samajdar, I.
2014-04-01
This study involved fabrication of simulated plate fuel elements. Uranium silicide of actual fuel elements was replaced with yttria. The fabrication stages were otherwise identical. The final cold rolled and/or straightened plates, without stress relief, showed an inverse relationship between bond strength and out of plane residual shear stress (τ13). Stress relief of τ13 was conducted over a range of temperatures/times (200-500 °C and 15-240 min) and led to corresponding improvements in bond strength. Fastest τ13 relief was obtained through 300 °C annealing. Elimination of microscopic shear bands, through recovery and partial recrystallization, was clearly the most effective mechanism of relieving τ13.
Shape optimization of road tunnel cross-section by simulated annealing
Directory of Open Access Journals (Sweden)
Sobótka Maciej
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the optimization procedure of simulated annealing (SA). The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
Evaluating strong measurement noise in data series with simulated annealing method
Carvalho, J; Haase, M; Lind, P G
2013-01-01
Many stochastic time series can be described by a Langevin equation composed of a deterministic and a stochastic dynamical part. Such a stochastic process can be reconstructed by means of a recently introduced nonparametric method, thus increasing the predictability, i.e. knowledge of the macroscopic drift and the microscopic diffusion functions. If the measurement of a stochastic process is affected by additional strong measurement noise, the reconstruction process cannot be applied. Here, we present a method for the reconstruction of stochastic processes in the presence of strong measurement noise, based on a suitably parametrized ansatz. At the core of the process is the minimization of the functional distance between terms containing the conditional moments taken from measurement data, and the corresponding ansatz functions. It is shown that a minimization of the distance by means of a simulated annealing procedure yields better results than a previously used Levenberg-Marquardt algorithm, which permits a...
COLSS Axial Power Distribution Synthesis using Artificial Neural Network with Simulated Annealing
Energy Technology Data Exchange (ETDEWEB)
Shim, K. W.; Oh, D. Y.; Kim, D. S.; Choi, Y. J.; Park, Y. H. [KEPCO Nuclear Fuel Company, Inc., Daejeon (Korea, Republic of)
2015-05-15
The core operating limit supervisory system (COLSS) is an application program implemented in the plant monitoring system (PMS) of nuclear power plants (NPPs). COLSS aids the operator in maintaining plant operation within selected limiting conditions for operation (LCOs), such as the departure from nucleate boiling ratio (DNBR) margin and the linear heat rate (LHR) margin. In order to calculate the above LCOs, the COLSS uses the core-averaged axial power distribution (APD). In COLSS, a 40-node APD is synthesized from the 5-level in-core neutron flux detector signals using the Fourier series method. We propose an artificial neural network (ANN) with simulated annealing (SA) instead of the Fourier series method to synthesize the APD in COLSS. The proposed method is more accurate than the current method, as shown by the axial shape RMS errors.
Application of simulated annealing algorithm to improve work roll wear model in plate mills
Institute of Scientific and Technical Information of China (English)
[Author not listed]
2002-01-01
Employing a Simulated Annealing Algorithm (SAA) and a large set of measured data, a calculation model of work roll wear was built for the 2 800 mm 4-high mill of Wuhan Iron and Steel (Group) Co. (WISCO). The model is a semi-theoretical practical formula whose pattern and magnitude could hardly be determined with classical optimization methods, but the problem could be resolved by SAA. The model predicts the wear profiles of the work roll in a rolling unit with high precision. After one year of application, the results show that the model is feasible in engineering and can be applied to predict the wear profiles of work rolls in other mills.
An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm
Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong
2017-03-01
Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies with the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate formed by a cascaded system of birefringent plates are studied. For the analysis of the combination, we use the Jones matrix method to derive general expressions for the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed based on the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can change linearly polarized light into circularly polarized light with a less than 3.2% degree of linear polarization (DOLP) over that wide wavelength range.
Simulated annealing for three-dimensional low-beta reduced MHD equilibria in cylindrical geometry
Furukawa, M
2016-01-01
Simulated annealing (SA) is applied to three-dimensional (3D) equilibrium calculation of ideal, low-beta reduced MHD in cylindrical geometry. The SA is based on the theory of Hamiltonian mechanics. The dynamical equation of the original system, low-beta reduced MHD in this study, is modified so that the energy changes monotonically while preserving the Casimir invariants in the artificial dynamics. An equilibrium of the system is given by an extremum of the energy; therefore SA can be used as a method for calculating ideal MHD equilibria. Previous studies demonstrated that SA succeeds in reaching various MHD equilibria in a two-dimensional rectangular domain. In this paper, the theory is applied to 3D equilibria of ideal, low-beta reduced MHD. An example of an equilibrium with magnetic islands, obtained as a lower energy state, is shown. Several versions of the artificial dynamics that effect smoothing are also developed.
A hybrid Tabu search-simulated annealing method to solve quadratic assignment problem
Directory of Open Access Journals (Sweden)
Mohamad Amin Kaviani
2014-06-01
Quadratic assignment problem (QAP) is considered one of the most complicated combinatorial optimization problems. The problem is NP-hard, and optimal solutions are not available for large-scale instances. This paper presents a hybrid method using tabu search and the simulated annealing technique to solve QAP, called TABUSA. Using some well-known problems from QAPLIB, generated by Burkard et al. (1997) [Burkard, R. E., Karisch, S. E., & Rendl, F. (1997). QAPLIB - a quadratic assignment problem library. Journal of Global Optimization, 10(4), 391-403.], the two methods, TABUSA and TS, are both coded in MATLAB and compared in terms of relative percentage deviation (RPD) for all instances. The performance of the proposed method is examined against tabu search, and the preliminary results indicate that the hybrid method is capable of solving real-world problems efficiently.
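One simple way to hybridize tabu search with simulated annealing can be sketched as below. This is an assumption-laden illustration, not TABUSA itself (whose exact rules the abstract does not specify): here the tabu list merely blocks recently accepted swaps from being retried, while the Metropolis criterion decides acceptance.

```python
import math
import random
from collections import deque

def qap_cost(perm, flow, dist):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_sa(flow, dist, t0=50.0, cooling=0.99, steps=3000,
            tabu_len=15, seed=11):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cost
    tabu = deque(maxlen=tabu_len)            # recently accepted swap pairs
    for step in range(steps):
        t = t0 * cooling ** step
        i, j = rng.sample(range(n), 2)
        if (min(i, j), max(i, j)) in tabu:
            continue                          # skip tabu moves
        perm[i], perm[j] = perm[j], perm[i]
        c = qap_cost(perm, flow, dist)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            cost = c
            tabu.append((min(i, j), max(i, j)))
            if c < best_cost:
                best, best_cost = perm[:], c
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo rejected swap
    return best, best_cost
```

The `deque` with `maxlen` ages tabu entries out automatically, so the prohibition is short-term memory in the tabu-search sense layered on top of the annealing acceptance rule.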
Directory of Open Access Journals (Sweden)
Jingwei Song
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weigh a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error has been reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%.
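Annealing the combination weights over the probability simplex can be sketched as follows. The three component forecasters in the test below are synthetic biased copies of the actual series, invented for illustration; they are not the paper's chaotic, ANN, and PLS-SVM models.

```python
import math
import random

def mape(weights, preds, actual):
    """Mean absolute percentage error of the weighted combination."""
    err = sum(abs(sum(w * p[k] for w, p in zip(weights, preds)) - a) / abs(a)
              for k, a in enumerate(actual))
    return 100.0 * err / len(actual)

def ensemble_weights(preds, actual, t0=1.0, cooling=0.997,
                     steps=3000, seed=5):
    rng = random.Random(seed)
    w = [1.0 / len(preds)] * len(preds)       # start from equal weights
    cur = mape(w, preds, actual)
    best, best_w = cur, w[:]
    for step in range(steps):
        t = t0 * cooling ** step
        cand = [max(0.0, v + rng.gauss(0.0, 0.05)) for v in w]
        s = sum(cand) or 1.0
        cand = [v / s for v in cand]           # renormalize onto the simplex
        c = mape(cand, preds, actual)
        if c < cur or rng.random() < math.exp((cur - c) / t):
            w, cur = cand, c
            if c < best:
                best, best_w = c, cand[:]
    return best_w, best
```

Renormalizing each proposal keeps the weights non-negative and summing to one, so every visited state is a valid convex combination of the component forecasts.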
Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.
2015-12-01
Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could
Adaptive resolution simulation of liquid water
Energy Technology Data Exchange (ETDEWEB)
Praprotnik, Matej [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Matysiak, Silvina [Department of Chemistry, Rice University, 6100 Main Street, Houston, TX 77005 (United States); Delle Site, Luigi [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Kremer, Kurt [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Clementi, Cecilia [Department of Chemistry, Rice University, 6100 Main Street, Houston, TX 77005 (United States)
2007-07-25
Water plays a central role in biological systems and processes, and is equally relevant in a large range of industrial and technological applications. Being the most important natural solvent, its presence uniquely influences biological function as well as technical processes. Because of their importance, aqueous solutions are among the most experimentally and theoretically studied systems. However, many questions still remain open. Both experiments and theoretical models are usually restricted to specific cases. In particular, all-atom simulations of biomolecules and materials in water are computationally very expensive and often not possible, mainly due to the computational effort to obtain water-water interactions in regions not relevant for the problem under consideration. In this paper we present a coarse-grained model that can reproduce the behaviour of liquid water at standard temperature and pressure remarkably well. The model is then used in a multiscale simulation of liquid water, where a spatially adaptive molecular resolution procedure allows one to change from a coarse-grained to an all-atom representation on-the-fly. We show that this approach leads to the correct description of essential thermodynamic and structural properties of liquid water. Our adaptive multiscale scheme allows for significantly more extensive simulations than existing approaches by taking explicit water into account only in the regions where the atomistic details are physically relevant. (fast track communication)
Elemental thin film depth profiles by ion beam analysis using simulated annealing - a new tool
Energy Technology Data Exchange (ETDEWEB)
Jeynes, C [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom); Barradas, N P [Instituto Tecnologico e Nuclear, E.N. 10, Sacavem (Portugal); Marriott, P K [Department of Statistics, National University of Singapore, Singapore (Singapore); Boudreault, G [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom); Jenkin, M [School of Electronics Computing and Mathematics, University of Surrey, Guildford (United Kingdom); Wendler, E [Friedrich-Schiller-Universitaet Jena, Institut fuer Festkoerperphysik, Jena (Germany); Webb, R P [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom)
2003-04-07
Rutherford backscattering spectrometry (RBS) and related techniques have long been used to determine the elemental depth profiles in films a few nanometres to a few microns thick. However, although obtaining spectra is very easy, solving the inverse problem of extracting the depth profiles from the spectra is not possible analytically except for special cases. It is because these special cases include important classes of samples, and because skilled analysts are adept at extracting useful qualitative information from the data, that ion beam analysis is still an important technique. We have recently solved this inverse problem using the simulated annealing algorithm. We have implemented the solution in the 'IBA DataFurnace' code, which has been developed into a very versatile and general new software tool that analysts can now use to rapidly extract quantitative accurate depth profiles from real samples on an industrial scale. We review the features, applicability and validation of this new code together with other approaches to handling IBA (ion beam analysis) data, with particular attention being given to determining both the absolute accuracy of the depth profiles and statistically accurate error estimates. We include examples of analyses using RBS, non-Rutherford elastic scattering, elastic recoil detection and non-resonant nuclear reactions. High depth resolution and the use of multiple techniques simultaneously are both discussed. There is usually systematic ambiguity in IBA data and Butler's example of ambiguity (1990 Nucl. Instrum. Methods B 45 160-5) is reanalysed. Analyses are shown: of evaporated, sputtered, oxidized, ion implanted, ion beam mixed and annealed materials; of semiconductors, optical and magnetic multilayers, superconductors, tribological films and metals; and of oxides on Si, mixed metal silicides, boron nitride, GaN, SiC, mixed metal oxides, YBCO and polymers. (topical review)
Hamiltonian adaptive resolution simulation for molecular liquids.
Potestio, Raffaello; Fritsch, Sebastian; Español, Pep; Delgado-Buscalioni, Rafael; Kremer, Kurt; Everaers, Ralf; Donadio, Davide
2013-03-08
Adaptive resolution schemes allow the simulation of a molecular fluid treating simultaneously different subregions of the system at different levels of resolution. In this work we present a new scheme formulated in terms of a global Hamiltonian. Within this approach equilibrium states corresponding to well-defined statistical ensembles can be generated making use of all standard molecular dynamics or Monte Carlo methods. Models at different resolutions can thus be coupled, and thermodynamic equilibrium can be modulated keeping each region at desired pressure or density without disrupting the Hamiltonian framework.
Lutsyshyn, Yaroslav
2016-01-01
We developed a CUDA-based parallelization of the annealing method for the inverse Laplace transform problem. The algorithm is based on an annealing scheme and minimizes the residual of the reconstruction of the spectral function. We introduce local updates that preserve the first two sum rules and allow an efficient parallel CUDA implementation. Annealing is performed with the Monte Carlo method on a population of Markov walkers. We propose an imprinted-branching method to further improve the convergence of the annealing. The algorithm is tested on a truncated double-peak Lorentzian spectrum, with examples of how error in the input data affects the reconstruction.
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles with optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
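The automatic fitting described above can be sketched in outline. The following Python fragment (a minimal stand-in for the paper's MATLAB program, not its actual code) fits a single spherical structure by minimizing a weighted least-squares objective with a bare-bones simulated annealing loop; the cooling schedule, step sizes and parameter bounds are illustrative assumptions.

```python
import math
import random

def spherical(h, nugget, sill, a):
    """Spherical variogram model value at lag h (range a)."""
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def wls(params, lags, gammas, weights):
    """Weighted least-squares misfit between model and experimental variogram."""
    nugget, sill, a = params
    return sum(w * (spherical(h, nugget, sill, a) - g) ** 2
               for h, g, w in zip(lags, gammas, weights))

def fit_sa(lags, gammas, weights, bounds, n_iter=20000, t0=1.0, seed=0):
    """Minimal simulated-annealing fit of (nugget, sill, range)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    cur = wls(x, lags, gammas, weights)
    best, best_x = cur, list(x)
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9            # linear cooling
        i = rng.randrange(len(x))
        cand = list(x)
        lo, hi = bounds[i]
        cand[i] = min(hi, max(lo, x[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
        e = wls(cand, lags, gammas, weights)
        if e < cur or rng.random() < math.exp(-(e - cur) / t):  # Metropolis rule
            x, cur = cand, e
            if e < best:
                best, best_x = e, list(x)
    return best_x, best
```

Fitting noise-free synthetic variogram values generated from known parameters recovers a model with near-zero misfit, which is the kind of sanity check the paper's cross-validation step would extend to real data.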
A Multi-Operator Based Simulated Annealing Approach For Robot Navigation in Uncertain Environments
Directory of Open Access Journals (Sweden)
Hui Miao
2010-04-01
Full Text Available Optimization methods such as simulated annealing (SA) and the genetic algorithm (GA) are used for solving optimization problems. However, the computational processing time is crucial for real-time applications such as mobile robots. A multi-operator based SA approach, incorporating four additional mathematical operators, that can find the optimal path for robots in dynamic environments is proposed in this paper. It requires less computation time while giving better trade-offs among simplicity, far-field accuracy, and computational cost. The contributions of the work include the implementation of the simulated annealing algorithm for robot path planning in dynamic environments, and an enhanced new path planner for improving the efficiency of the path planning algorithm. The simulation results are compared with the previously published classic SA approach and the GA approach. The multi-operator based SA (MSA) approach is demonstrated through case studies not only to be effective in obtaining the optimal solution but also to be more efficient in both off-line and on-line processing for robot dynamic path planning.
Energy Technology Data Exchange (ETDEWEB)
Chiapetto, M. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium); Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Becquart, C.S. [Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Domain, C. [EDF R and D, Departement Materiaux et Mecanique des Composants, Les Renardieres, Moret sur Loing (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium)
2015-01-01
Post-irradiation annealing experiments are often used to obtain clearer information on the nature of defects produced by irradiation. However, their interpretation is not always straightforward without the support of physical models. We apply here a physically-based set of parameters for object kinetic Monte Carlo (OKMC) simulations of the nanostructural evolution of FeMnNi alloys under irradiation to the simulation of their post-irradiation isochronal annealing, from 290 to 600 C. The model adopts a ''grey alloy'' scheme, i.e. the solute atoms are not introduced explicitly, only their effect on the properties of point-defect clusters is. Namely, it is assumed that both vacancy and SIA clusters are significantly slowed down by the solutes. The slowing down increases with size until the clusters become immobile. Specifically, the slowing down of SIA clusters by Mn and Ni can be justified in terms of the interaction between these atoms and crowdions in Fe. The results of the model compare quantitatively well with post-irradiation isochronal annealing experimental data, providing clear insight into the mechanisms that determine the disappearance or re-arrangement of defects as functions of annealing time and temperature. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Institute of Scientific and Technical Information of China (English)
魏关锋; 姚平经; LUO Xing; ROETZEL Wilfried
2004-01-01
The multi-stream heat exchanger network synthesis (HENS) problem can be formulated as a mixed integer nonlinear programming model according to Yee et al. Its nonconvex nature leads to the existence of more than one optimum and to computational difficulty for traditional algorithms in finding the global optimum. Compared with deterministic algorithms, evolutionary computation provides a promising approach to tackle this problem. In this paper, a mathematical model of the multi-stream heat exchanger network synthesis problem is set up. Different from the assumption of isothermal mixing of stream splits, and thus the linear constraints, of Yee et al., non-isothermal mixing is supported. As a consequence, nonlinear constraints result and additional nonconvexity is introduced into the objective function. To solve the mathematical model, an algorithm named GA/SA (parallel genetic/simulated annealing algorithm) is detailed for application to the multi-stream heat exchanger network synthesis problem. The performance of the proposed approach is demonstrated with three examples, and the obtained solutions indicate that the presented approach is effective for multi-stream HENS.
Kang, Jiyoung; Yamasaki, Kazuhiko; Sano, Kuniaki; Tsutsui, Ken; Tsutsui, Kimiko M.; Tateno, Masaru
2017-01-01
Theoretical analyses of multivariate data have become increasingly important in various scientific disciplines. The multivariate curve resolution alternating least-squares (MCR-ALS) method is an integrated and systematic tool to decompose various types of spectral data into several pure spectra corresponding to distinct species. However, in the present study, the MCR-ALS calculation provided only unreasonable solutions when used to process the circular dichroism spectra of double-stranded DNA (228 bp) in complex with a DNA-binding peptide under various concentrations. To resolve this problem, we developed an algorithm that includes a simulated annealing (SA) protocol (the SA-MCR-ALS method) to facilitate the expansion of the sampling space. The analysis successfully decomposed the aforementioned data into three reasonable pure spectra. Thus, our SA-MCR-ALS scheme provides a useful tool for effective extended sampling, to investigate the substantial and detailed properties of various forms of multivariate data with significant difficulties in the degrees of freedom.
Directory of Open Access Journals (Sweden)
Masoud Rabbani
2016-02-01
Full Text Available This paper presents the capacitated Windy Rural Postman Problem with several vehicles. For this problem, two objectives are considered: one is the minimization of the total cost of all vehicle routes, expressed as the sum of the traversing costs, and the other is the reduction of the maximum cost of any vehicle route, in order to find a set of equitable tours for the vehicles. A mathematical formulation is provided. The multi-objective simulated annealing (MOSA) algorithm has been modified for solving this bi-objective NP-hard problem. To increase algorithm performance, the Taguchi technique is applied to design experiments for tuning the parameters of the algorithm. Numerical experiments are presented to show the efficiency of the model. Finally, the results of the MOSA have been compared with MOCS (multi-objective Cuckoo Search) to validate the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm provides good solutions and performs significantly better than the MOCS.
Simulated annealing (SA) for vehicle routing problems with soft time windows
Directory of Open Access Journals (Sweden)
Suphan Sodsoon
2014-12-01
Full Text Available The researcher has applied and developed a meta-heuristic method to solve the Vehicle Routing Problem with Soft Time Windows (VRPSTW). In this case there is a single depot and multiple customers, generally sparsely distributed, each with a different demand, a known number of demands, and a specific period of time in which to receive them. This is a representative combinatorial optimization problem in operations research and is known to be NP-hard. The research algorithm uses Simulated Annealing (SA) to determine optimal solutions with rapid solving times. After developing the algorithms, they are applied to examine the relevant factors and the optimum extended time windows, and these factors are tested on vehicle routing problems with time windows using Solomon's instances from the OR-Library, with a maximum of 25 customers. Six problems are included: C101, C102, R101, R102, RC101 and RC102. The results show the optimum extended time window at a level of 50%. Finally, comparing these answers with the cases of vehicle routing under fixed and flexible time windows shows percentage deviations of approximately -28.57% in the number of vehicles and approximately -28.57% in distance, while the algorithm spent an average processing time of 45.5 s per problem.
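As a rough illustration of how SA attacks a soft-time-window routing instance (a generic single-vehicle sketch, not the authors' algorithm), the fragment below scores a route with a lateness penalty, so late arrivals are penalized rather than forbidden, and improves it by annealed customer swaps; the penalty weight, service time and cooling constants are made-up parameters.

```python
import math
import random

def route_cost(route, dist, windows, service=1.0, penalty=10.0):
    """Travel cost plus soft time-window penalty (lateness is charged, not forbidden)."""
    t, cost, prev = 0.0, 0.0, 0               # start at the depot (index 0)
    for c in route:
        d = dist[prev][c]
        cost += d
        t += d
        early, late = windows[c]
        if t < early:
            t = early                         # wait for the window to open (free)
        elif t > late:
            cost += penalty * (t - late)      # soft violation
        t += service
        prev = c
    return cost + dist[prev][0]               # return to the depot

def anneal_route(dist, windows, n_iter=5000, t0=5.0, seed=0):
    """Simulated annealing over customer orderings with a swap neighborhood."""
    rng = random.Random(seed)
    route = list(range(1, len(dist)))
    rng.shuffle(route)
    cur = route_cost(route, dist, windows)
    best, best_route = cur, list(route)
    for k in range(n_iter):
        temp = t0 * 0.999 ** k                        # geometric cooling
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]       # swap two customers
        cand = route_cost(route, dist, windows)
        if cand < cur or rng.random() < math.exp(-(cand - cur) / temp):
            cur = cand
            if cand < best:
                best, best_route = cand, list(route)
        else:
            route[i], route[j] = route[j], route[i]   # undo the swap
    return best_route, best
```

On a small instance with wide-open windows the annealed route matches the brute-force optimum, which is how such a toy version can be validated before adding tight windows.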
Ghosh, P; Bagchi, M C
2009-01-01
With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some important topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen network and counter-propagation artificial neural network (CP-ANN) models with GA- and SA-based feature selection have been applied for such QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models are developed for the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches has been carried out. Further analysis using the 3D-QSAR technique identifies two models, obtained by the GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of the steric and electrostatic field effects generated by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
Qin, Jin; Xiang, Hui; Ye, Yong; Ni, Linglin
2015-01-01
A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all the stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors.
Directory of Open Access Journals (Sweden)
A. Mateos
2016-01-01
Full Text Available Technological advances are required to accommodate air traffic control systems for the future growth of air traffic. In particular, detection and resolution of conflicts between aircraft is a problem that has attracted much attention in the last decade, becoming vital to improving safety standards in free-flight unstructured environments. We propose using the archived simulated annealing-based multiobjective optimization algorithm to deal with this problem, accounting for three admissible maneuvers (velocity, turn, and altitude changes) in a multiobjective context. The minimization of the maneuver number and magnitude, time delays, and deviations in the leaving points are considered for analysis. The optimal values for the algorithm parameter set are identified in the most complex instance, in which all aircraft have conflicts with each other, for 5, 10, and 20 aircraft. Moreover, the performance of the proposed approach is analyzed by means of a comparison with the Pareto front, computed using brute force for 5 aircraft, and the algorithm is also illustrated with a random instance with 20 aircraft.
SFN Coverage Optimization for DVB-T2 Digital TV Transmitters Using the Simulated Annealing Method
Directory of Open Access Journals (Sweden)
Adib Nur Ikhwan
2013-09-01
Full Text Available Digital TV broadcasting to be deployed in Indonesia initially used the DVB-T (Digital Video Broadcasting-Terrestrial) standard, which was replaced in 2012 by DVB-T2 (Digital Video Broadcasting-Terrestrial Second Generation). Consequently, earlier studies, including digital TV coverage optimization, are no longer relevant. Coverage is one of the important aspects of digital TV broadcasting. In this work, SFN (Single Frequency Network) coverage optimization for digital TV transmitters is carried out with the SA (Simulated Annealing) method. The SA method searches for a solution by moving from one solution to another, selecting the solution with the smallest energy (fitness) function. The SA optimization is performed by varying the positions of the digital TV transmitters to obtain the best placement. The optimization uses 10 cooling schedules, each run with 2 tests, in both the 2K and 4K FFT modes. The result of this study is that the SFN coverage area of the DVB-T2 digital TV transmitter achieved a best average relative coverage improvement of 2.348% with cooling schedule 7.
Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing
Directory of Open Access Journals (Sweden)
Khanh Doan V.K.
2014-06-01
Full Text Available Thermo-electric Coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant and no moving parts are needed. TECs generate no electrical or acoustical noise and are environmentally friendly. Over the past decades, many studies have sought to improve the efficiency of TECs by enhancing the material parameters and design parameters. The material parameters are restricted by currently available materials and module fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, some previous research is reviewed to show the diversity of optimization approaches in the design of TECs for enhancing performance and efficiency. After that, single-objective optimization problems (SOP) are first solved using the Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize geometric properties so that TECs operate at near-optimal conditions. Both equality and inequality constraints were taken into consideration.
Simulated Annealing-Based Ant Colony Algorithm for Tugboat Scheduling Optimization
Directory of Open Access Journals (Sweden)
Qi Xu
2012-01-01
Full Text Available As the “first service station” for ships in the whole port logistics system, the tugboat operation system is one of the most important systems in port logistics. This paper formulates the tugboat scheduling problem as a multiprocessor task scheduling problem (MTSP) after analyzing the characteristics of tugboat operation. The model considers factors of multiple anchorage bases, different operation modes, and three stages of operations (berthing/shifting-berth/unberthing). The objective is to minimize the total operation time for all tugboats in a port. A hybrid simulated annealing-based ant colony algorithm is proposed to solve the addressed problem. Numerical experiments without the shifting-berth operation verified the effectiveness of the algorithm and showed that more effective sailing is possible if tugboats return to the anchorage base in a timely manner. Experiments with the shifting-berth operation show that the objective is most sensitive to the proportion of shifting-berth operations, influenced slightly by the tugboat deployment scheme, and not sensitive to the handling operation times.
Finding a Hadamard Matrix by Simulated Annealing of Spin-Vectors
Suksmono, Andriyan Bayu
2016-01-01
Reformulating a combinatorial problem as the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as SA (Simulated Annealing). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into an SH (Semi-normalized Hadamard) matrix, whose first column is the unity vector and whose remaining columns are vectors with an equal number of -1 and +1 entries, called SH-vectors. We define SH spin-vectors to represent the SH vectors, which play the role of the spins in the Ising model. The topology of the lattice is generalized into a graph, whose edges represent the orthogonality relationship among the SH spin-vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+,-} spin-pairs within the SH-spin vectors whi...
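The spin-pair exchange move described in this abstract can be illustrated for a small order. The sketch below is a toy version under the stated SH-matrix convention (first column all +1, other columns balanced); the energy function, cooling constants and move acceptance are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def energy(M):
    """Sum of squared pairwise column dot products; zero iff all columns are orthogonal."""
    n = len(M)
    e = 0
    for a in range(n):
        for b in range(a + 1, n):
            d = sum(M[i][a] * M[i][b] for i in range(n))
            e += d * d
    return e

def anneal_hadamard(n=4, n_iter=20000, t0=8.0, seed=0):
    """Search for an order-n semi-normalized Hadamard matrix by spin-pair annealing."""
    rng = random.Random(seed)
    # first column all +1; every other column balanced with n/2 entries of each sign
    M = [[1] + [0] * (n - 1) for _ in range(n)]
    for c in range(1, n):
        col = [1] * (n // 2) + [-1] * (n // 2)
        rng.shuffle(col)
        for i in range(n):
            M[i][c] = col[i]
    cur = energy(M)
    for k in range(n_iter):
        if cur == 0:
            break                              # orthogonal: Hadamard matrix found
        t = t0 * 0.9995 ** k                   # geometric cooling
        c = rng.randrange(1, n)                # never touch the unity column
        ups = [i for i in range(n) if M[i][c] == 1]
        downs = [i for i in range(n) if M[i][c] == -1]
        i, j = rng.choice(ups), rng.choice(downs)
        M[i][c], M[j][c] = -1, 1               # exchange a {+,-} spin pair
        e = energy(M)
        if e <= cur or rng.random() < math.exp(-(e - cur) / t):
            cur = e
        else:
            M[i][c], M[j][c] = 1, -1           # undo the exchange
    return M, cur
```

Because the swap is confined to a single column, every move preserves the balanced ±1 structure of the SH-vectors, so the search space stays exactly the set of quasi H-matrices the abstract describes.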
Directory of Open Access Journals (Sweden)
Helio Yochihiro Fuchigami
2014-08-01
Full Text Available This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six Simulated Annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. This study can be classified as “applied research” regarding its nature, “exploratory” regarding its objectives and “experimental” as to procedures, besides taking a “quantitative” approach. The proposed algorithms were effective regarding the solution and computationally efficient. Results of Analysis of Variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The use of the PS4 scheme, which moves a subsequence of jobs, is suggested because it provides the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
An archived multi-objective simulated annealing for a dynamic cellular manufacturing system
Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza
2014-05-01
To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides an extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. Two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the considered objectives in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial and feasible solution and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., the ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II, for some randomly generated problems based on some comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.
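The "archived" aspect of AMOSA rests on Pareto dominance: a candidate enters the archive only if no archived solution dominates it, and it evicts any archived solutions it dominates. A minimal sketch of that bookkeeping for minimization (illustrative only, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, sol):
    """Insert sol unless some archived solution dominates it;
    drop any archived solutions that sol dominates."""
    if any(dominates(a, sol) for a in archive):
        return archive
    return [a for a in archive if not dominates(sol, a)] + [sol]
```

In the full AMOSA algorithm this archive also drives the acceptance probability of new states, so the archive update is on the hot path of every annealing step.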
Institute of Scientific and Technical Information of China (English)
张火明; 黄赛花; 管卫兵
2014-01-01
The highest similarity degree of the static characteristics, including both the horizontal and vertical restoring force-displacement characteristics of the total mooring system, as well as the tension-displacement characteristics of a representative single mooring line, between the truncated and full-depth systems is obtained by an annealing simulation algorithm for hybrid discrete variables (ASFHDV for short). A “baton” optimization approach is proposed by utilizing ASFHDV. After each baton of optimization, if a few of the dimensional variables reach their upper or lower limits, the bounds of those variables are expanded. In consideration of the experimental requirements, the length of the upper mooring line should not be smaller than 8 m, and the diameter of the anchor chain on the bottom should be larger than 0.03 m. A 100000 t turret-moored FPSO in a water depth of 304 m, with a truncated water depth of 76 m, is taken as an example of equivalent-depth truncated mooring system optimal design, and the calculation is performed to obtain the configuration parameters of the truncated mooring system. The numerical results indicate that the present truncated mooring system design is successful and effective.
Back-Analysis of Tunnel Response from Field Monitoring Using Simulated Annealing
Vardakos, Sotirios; Gutierrez, Marte; Xia, Caichu
2016-12-01
This paper deals with the use of field monitoring data to improve predictions of tunnel response during and after construction from numerical models. Computational models are powerful tools for the performance-based engineering analysis and design of geotechnical structures; however, the main challenge to their use is the paucity of information to establish the input data needed to yield reliable predictions that can be used in the design of geotechnical structures. Field monitoring can offer not only the means to verify modeling results but also faster and more reliable ways to determine model parameters and to improve the reliability of model predictions. Back-analysis involves the determination of parameters required in computational models using field-monitored data, and is particularly suited to underground construction, where more information about ground conditions and response becomes available as the construction progresses. A crucial component of back-analysis is an algorithm to find a set of input parameters that will minimize the difference between predicted and measured performance (e.g., in terms of deformations, stresses, or tunnel support loads). Methods of back-analysis can be broadly classified as direct and gradient-based optimization techniques. An alternative methodology to carry out the nonlinear optimization involved in back-analyses is the use of heuristic techniques. Heuristic methods refer to experience-based techniques for problem-solving, learning, and discovery that find a solution which is not guaranteed to be fully optimal, but good enough for a given set of goals. This paper focuses on the use of the heuristic simulated annealing (SA) method in the back-analysis of tunnel responses from field-monitored data. SA emulates the metallurgical processing of metals such as steel by annealing, which involves a gradual and sufficiently slow cooling of a metal from the heated phase, leading to a final material with a minimum of imperfections.
Adaptive Resolution Simulation in Equilibrium and Beyond
Wang, Han
2014-01-01
In this paper, we investigate the equilibrium statistical properties of both the force and potential interpolations of adaptive resolution simulation (AdResS) under the theoretical framework of grand-canonical-like AdResS (GC-AdResS). The thermodynamic relations between the higher and lower resolutions are derived by considering the absence of fundamental conservation laws in mechanics for both branches of AdResS. In order to investigate the applicability of the AdResS method in studying properties beyond equilibrium, we demonstrate the accuracy of AdResS in computing dynamical properties in two numerical examples: the velocity autocorrelation of pure water and the conformational relaxation of alanine dipeptide dissolved in water. Theoretical and technical open questions of the AdResS method are discussed at the end of the paper.
Afanasiev, M.; Pratt, R. G.; Kamei, R.; McDowell, G.
2012-12-01
Crosshole seismic tomography has been used by Vale to provide geophysical images of mineralized massive sulfides in the Eastern Deeps deposit at Voisey's Bay, Labrador, Canada. To date, these data have been processed using traveltime tomography, and we seek to improve the resolution of these images by applying acoustic Waveform Tomography. Due to the computational cost of acoustic waveform modelling, local descent algorithms are employed in Waveform Tomography; due to non-linearity, an initial model is required which predicts first-arrival traveltimes to within a half-cycle of the lowest frequency used. Because seismic velocity anisotropy can be significant in hardrock settings, the initial model must quantify the anisotropy in order to meet the half-cycle criterion. In our case study, significant velocity contrasts between the target massive sulfides and the surrounding country rock led to difficulties in generating an accurate anisotropy model through traveltime tomography, and our starting model for Waveform Tomography failed the half-cycle criterion at large offsets. We formulate a new, semi-global approach for finding the best-fit 1-D elliptical anisotropy model using simulated annealing. Through random perturbations to Thomsen's ε parameter, we explore the L2 norm of the frequency-domain phase residuals in the space of potential anisotropy models: if a perturbation decreases the residuals, it is always accepted, but if a perturbation increases the residuals, it is accepted with the probability P = exp(-(Ei-E)/T). This is the Metropolis criterion, where Ei is the value of the residuals at the current iteration, E is the value of the residuals for the previously accepted model, and T is a probability control parameter, which is decreased over the course of the simulation via a preselected cooling schedule. Convergence to the global minimum of the residuals is guaranteed only for infinitely slow cooling, but in practice good results are obtained from a variety of cooling schedules.
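The acceptance rule and cooling schedule quoted in this abstract translate directly into code. The sketch below anneals a one-dimensional toy misfit standing in for the frequency-domain phase residuals; the misfit function, starting point and schedule constants are invented for illustration and are not the study's actual residuals or parameters.

```python
import math
import random

def metropolis_anneal(misfit, x0, step, t0, alpha, n_iter, seed=0):
    """Semi-global 1-D search: always accept downhill moves, accept uphill
    moves with probability P = exp(-(E_i - E)/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, e = x0, misfit(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(n_iter):
        xi = x + rng.gauss(0.0, step)        # random perturbation of the parameter
        ei = misfit(xi)
        if ei < e or rng.random() < math.exp(-(ei - e) / t):  # Metropolis criterion
            x, e = xi, ei
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha                           # preselected geometric cooling schedule
    return best_x, best_e
```

Run against a hypothetical multimodal misfit with its global basin near 0.12, the search escapes local ripples at high temperature and settles near the global minimum as the temperature drops, which is exactly the behavior the abstract relies on for the anisotropy search.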
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of the optimization; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
Directory of Open Access Journals (Sweden)
Maurer Till
2005-04-01
Full Text Available Abstract Background We have developed the program PERMOL for semi-automated homology modeling of proteins. It is based on restrained molecular dynamics using a simulated annealing protocol in torsion angle space. As the main restraints defining the optimal local geometry of the structure, weighted mean dihedral angles and their standard deviations are used, which are calculated with an algorithm described earlier by Döker et al. (1999, BBRC, 257, 348–350). The overall long-range contacts are established via a small number of distance restraints between atoms involved in hydrogen bonds and backbone atoms of conserved residues. Employing the restraints generated by PERMOL, three-dimensional structures are obtained using standard molecular dynamics programs such as DYANA or CNS. Results To test this modeling approach it has been used for predicting the structure of the histidine-containing phosphocarrier protein HPr from E. coli and the structure of the human peroxisome proliferator activated receptor γ (PPARγ). The divergence between the modeled HPr and the previously determined X-ray structure was comparable to the divergence between the X-ray structure and the published NMR structure. The modeled structure of PPARγ was also very close to the previously solved X-ray structure, with an RMSD of 0.262 nm for the backbone atoms. Conclusion In summary, we present a new method for homology modeling capable of producing high-quality structure models. An advantage of the method is that it can be used in combination with incomplete NMR data to obtain reasonable structure models in accordance with the experimental data.
Automated integration of genomic physical mapping data via parallel simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Slezak, T.
1994-06-01
The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques, including fluorescence in situ hybridization (FISH) at three levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M × N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially ordered cosmid clones provide a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
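The M × N gap-minimization step described above can be illustrated with a toy serial version in Python: a binary intersection matrix whose columns (clones) are reordered by simulated annealing so that the 1s in each row (contigs/probes) become as contiguous as possible. The matrix, permutation, and cooling schedule are invented for the sketch, and the FISH ordering constraints and parallelization are omitted.

```python
import math
import random

def row_gaps(matrix, order):
    """Count, over all rows, the maximal runs of 0s lying strictly
    between 1s when columns are read in the given order."""
    total = 0
    for row in matrix:
        s = "".join(str(row[j]) for j in order).strip("0")
        total += sum(1 for seg in s.split("1") if seg)
    return total

def anneal_order(matrix, steps=20000, t0=2.0, seed=1):
    """Simulated annealing over column permutations with swap moves."""
    rng = random.Random(seed)
    n = len(matrix[0])
    order = list(range(n))
    cost = row_gaps(matrix, order)
    best, best_cost = order[:], cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9     # linear cooling
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]     # propose a swap
        new = row_gaps(matrix, order)
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best, best_cost

# four "contig" rows over eight clone columns, scrambled by a known permutation
rows_true = [[1, 1, 1, 0, 0, 0, 0, 0],
             [0, 1, 1, 1, 1, 0, 0, 0],
             [0, 0, 0, 1, 1, 1, 0, 0],
             [0, 0, 0, 0, 0, 1, 1, 1]]
perm = [3, 0, 6, 1, 7, 4, 2, 5]
shuffled = [[r[p] for p in perm] for r in rows_true]
order, gaps = anneal_order(shuffled)
print(gaps)  # 0 means a gap-free (consecutive-ones) ordering was recovered
```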
Lee, Cheng-Kuang
2014-12-10
© 2014 American Chemical Society. The nanomorphologies of the bulk heterojunction (BHJ) layer of polymer solar cells are extremely sensitive to the electrode materials and thermal annealing conditions. In this work, the correlations of electrode materials, thermal annealing sequences, and the resulting BHJ nanomorphological details of a P3HT:PCBM BHJ polymer solar cell are studied by a series of large-scale, coarse-grained (CG) molecular simulations of a system composed of PEDOT:PSS/P3HT:PCBM/Al layers. Simulations are performed for various configurations of electrode materials as well as processing temperatures. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link between morphology and processing conditions. Our analysis indicates that the vertical phase segregation of the P3HT:PCBM blend depends strongly on the electrode material and thermal annealing schedule. A thin P3HT-rich film is formed on top, regardless of the bottom electrode material, when the BHJ layer is exposed to the free surface during thermal annealing. In addition, preferential segregation of P3HT chains and PCBM molecules toward the PEDOT:PSS and Al electrodes, respectively, is observed. Detailed morphology analysis indicated that, surprisingly, vertical phase segregation does not affect the connectivity of donor/acceptor domains with the respective electrodes. However, the formation of P3HT/PCBM depletion zones next to the P3HT/PCBM-rich zones can be a potential bottleneck for electron/hole transport due to the increase in transport pathway length. Analysis in terms of the fraction of intra- and interchain charge transport revealed that the processing schedule affects the average vertical orientation of polymer chains, which may be crucial for enhanced charge transport, nongeminate recombination, and charge collection. The present study establishes a more detailed link between processing and morphology by combining multiscale molecular
Chen, Hongwei; Kong, Xi; Chong, Bo; Qin, Gan; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng
2011-03-01
The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originated from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schrödinger evolution with a NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.
Hasegawa, M.
2011-03-01
The aim of the present study is to elucidate how simulated annealing (SA) works in its finite-time implementation, starting from a verification of its conventional optimization scenario based on equilibrium statistical mechanics. Two main experiments and one supplementary experiment, whose design is inspired by concepts and methods developed for studies of liquids and glasses, are performed on two types of random traveling salesman problems. In the first experiment, a newly parameterized temperature schedule is introduced to simulate a quasistatic process along the scenario, and a parametric study is conducted to investigate the optimization characteristics of this adaptive cooling. In the second experiment, the search trajectory of the Metropolis algorithm (constant-temperature SA) is analyzed in the landscape paradigm in the hope of drawing a precise physical analogy by comparison with the corresponding dynamics of glass-forming molecular systems. These two experiments indicate that the effectiveness of finite-time SA comes not from equilibrium sampling at low temperature but from downward interbasin dynamics occurring before equilibrium. These dynamics work most effectively at an intermediate temperature varying with the total search time, and this effective temperature is identified using the Deborah number. To test directly the role of these relaxation dynamics in the process of cooling, a supplementary experiment is performed using another parameterized temperature schedule with a piecewise-variable cooling rate, and the effect of this biased cooling is examined systematically. The results show that the optimization performance is not only dependent on but also sensitive to cooling in the vicinity of the above effective temperature, and that this feature can be interpreted as a consequence of the presence or absence of the workable interbasin dynamics. It is confirmed for the present instances that the effectiveness of finite-time SA derives from the glassy relaxation
Hao, Ge-Fei; Xu, Wei-Fang; Yang, Sheng-Gang; Yang, Guang-Fu
2015-10-23
Protein and peptide structure predictions are of paramount importance for understanding their functions, as well as their interactions with other molecules. However, the use of molecular simulation techniques to directly predict peptide structure from the primary amino acid sequence is hindered by the rough topology of the conformational space and the limited simulation time scale. We developed here a new strategy, named Multiple Simulated Annealing-Molecular Dynamics (MSA-MD), to identify the native states of a peptide and a miniprotein. A cluster of near-native structures could be obtained using the MSA-MD method, which turned out to be significantly more efficient in reaching the native structure than continuous MD and conventional SA-MD simulations.
Institute of Scientific and Technical Information of China (English)
MA Li-ming; JIANG Hong; WANG Xiao-chun
2004-01-01
The algorithm is divided into two steps. The first step pre-locates the blank by aligning its centre of gravity and approximate normal vector with those of the destination surfaces, ensuring the largest overlap of the projections of the two objects on a plane perpendicular to the normal vector. The second step optimizes an objective function by means of a gradient-simulated annealing algorithm to obtain the best match between a set of distributed points on the blank and the destination surfaces. An example of machining hydroelectric turbine blades is given to verify the effectiveness of the algorithm.
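The pre-location step above amounts to a rotation (aligning normals) plus a translation (aligning centroids). A self-contained Python sketch, using an invented unit-square blank and target pose rather than the paper's turbine-blade data:

```python
import math

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rotation_aligning(a, b):
    """3x3 matrix rotating unit vector a onto unit vector b, via
    Rodrigues' form R = I + K + K^2/(1 + a.b); assumes a != -b."""
    v = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    c = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]          # cosine of the angle
    K = [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]
    return [[(1.0 if i == j else 0.0) + K[i][j]
             + sum(K[i][m] * K[m][j] for m in range(3)) / (1.0 + c)
             for j in range(3)] for i in range(3)]

def apply_rot(R, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def prelocate(blank, blank_normal, dest_centroid, dest_normal):
    """Rotate the blank about its centroid so its normal matches the
    destination normal, then translate the centroids into coincidence."""
    g = centroid(blank)
    R = rotation_aligning(blank_normal, dest_normal)
    moved = []
    for p in blank:
        q = apply_rot(R, tuple(pi - gi for pi, gi in zip(p, g)))
        moved.append(tuple(qi + di for qi, di in zip(q, dest_centroid)))
    return moved

# unit square in the z = 0 plane (normal +z), moved onto a tilted target pose
s3 = 1.0 / math.sqrt(3.0)
blank = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
placed = prelocate(blank, (0.0, 0.0, 1.0), (5.0, 5.0, 5.0), (s3, s3, s3))
```

After this pre-location the second (annealing) step would only need to refine the pose, which is why the two-step split pays off.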
Using the adaptive blockset for simulation and rapid prototyping
DEFF Research Database (Denmark)
Ravn, Ole
1999-01-01
The paper presents the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The basics of indirect adaptive controllers are summarized. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers...
Directory of Open Access Journals (Sweden)
Óscar Begambre
2010-01-01
Full Text Available In this research, the Simulated Annealing algorithm (SA) is employed to solve the inverse problem of damage detection in beam-type structures using noise-polluted modal data. The formulation of the objective function for the SA optimization procedure is based on the modified residual force method. The SA used in this study performed better than a genetic algorithm (GA) on two difficult benchmark functions reported in the international literature. The proposed structural damage-identification scheme is confirmed and assessed numerically using Euler-Bernoulli beam theory and finite element models (FEM) of cantilever and free-free beams.
Neuromuscular adaptation to actual and simulated weightlessness
Edgerton, V. R.; Roy, R. R.
1994-01-01
The chronic "unloading" of the neuromuscular system during spaceflight has detrimental functional and morphological effects. Changes in the metabolic and mechanical properties of the musculature can be attributed largely to the loss of muscle protein and the alteration in the relative proportion of the proteins in skeletal muscle, particularly in the muscles that have an antigravity function under normal loading conditions. These adaptations could result in decrements in the performance of routine or specialized motor tasks, both of which may be critical for survival in an altered gravitational field, i.e., during spaceflight and during return to 1 G. For example, the loss in extensor muscle mass requires a higher percentage of recruitment of the motor pools for any specific motor task. Thus, a faster rate of fatigue will occur in the activated muscles. These consequences emphasize the importance of developing techniques for minimizing muscle loss during spaceflight, at least in preparation for the return to 1 G after spaceflight. New insights into the complexity and the interactive elements that contribute to the neuromuscular adaptations to space have been gained from studies of the role of exercise and/or growth factors as countermeasures of atrophy. The present chapter illustrates the inevitable interactive effects of neural and muscular systems in adapting to space. It also describes the considerable progress that has been made toward the goal of minimizing the functional impact of the stimuli that induce the neuromuscular adaptations to space.
Energy Technology Data Exchange (ETDEWEB)
Nakos, J.T.; Rosinski, S.T.; Acton, R.U.
1994-11-01
The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at heat-up/cooldown rates of 7, 14, and 28°C/hr [12.5, 25, and 50°F/hr] as measured on the heated face. A peak temperature of 454°C [850°F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be one-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.
Chaotic Simulated Annealing by A Neural Network Model with Transient Chaos
Chen, Luonan; Aihara, Kazuyuki
1997-01-01
We propose a neural network model with transient chaos, or a transiently chaotic neural network (TCNN), as an approximation method for combinatorial optimization problems, introducing transiently chaotic dynamics into neural networks. Unlike conventional neural networks with only point attractors, the proposed neural network has richer and more flexible dynamics, so it can be expected to have a higher ability to search for globally optimal or near-optimal solutions. A significant property of this model is that the chaotic neurodynamics are temporarily generated for searching and self-organizing, and eventually vanish with the autonomous decrease of a bifurcation parameter corresponding to the "temperature" of the usual annealing process. Therefore, the neural network gradually approaches, through the transient chaos, a dynamical structure similar to that of conventional models such as the Hopfield neural network, which converges to a stable equilibrium point. Since the optimization process of the transiently chaoti...
DEFF Research Database (Denmark)
Ravn, Ole
1998-01-01
The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are identification, controller...
Directory of Open Access Journals (Sweden)
Doddy Kastanya
2017-02-01
Full Text Available In any reactor physics analysis, the instantaneous power distribution in the core can be calculated when the actual bundle-wise burnup distribution is known. Considering the fact that CANDU (Canada Deuterium Uranium) reactors utilize on-power refueling to compensate for the reduction of reactivity due to fuel burnup, in CANDU fuel management analysis snapshots of power and burnup distributions can be obtained by simulating and tracking the reactor operation over an extended period using various tools, such as the *SIMULATE module of the Reactor Fueling Simulation Program (RFSP) code. However, for some studies, such as an evaluation of a conceptual design of a next-generation CANDU reactor, the preferred approach to obtaining a snapshot of the power distribution in the core is based on the patterned-channel-age model implemented in the *INSTANTAN module of the RFSP code. The objective of this approach is to obtain a representative snapshot of core conditions quickly. At present, such patterns can be generated by a program called RANDIS, which is implemented within the *INSTANTAN module. In this work, we present an alternative approach to deriving the patterned-channel-age model, in which a simulated-annealing-based algorithm is used to find patterns that produce reasonable power distributions.
Energy Technology Data Exchange (ETDEWEB)
Fabbri, Paolo; Trevisani, Sebastiano [Dipartimento di Geologia, Paleontologia e Geofisica, Universita degli Studi di Padova, via Giotto 1, 35127 Padova (Italy)
2005-10-01
The spatial distribution of groundwater temperatures in the low-temperature (60-86°C) geothermal Euganean field of northeastern Italy has been studied using a geostatistical approach. The data set consists of 186 temperatures measured in a fractured limestone reservoir over an area of 8 km². Investigation of the spatial continuity by means of variographic analysis revealed the presence of anisotropies that are apparently related to the particular geologic structure of the area. After inference of variogram models, a simulated annealing procedure was used to perform conditional simulations of temperature in the domain being studied. These simulations honor the data values and reproduce the spatial continuity inferred from the data. Post-processing of the simulations permits an assessment of temperature uncertainties. Maps of estimated temperatures, interquartile range, and the probability of exceeding a prescribed 80°C threshold were also computed. The methodology described could prove useful when siting new wells in a geothermal area. (author)
Directory of Open Access Journals (Sweden)
Shangchia Liu
2015-01-01
Full Text Available In the field of distributed decision making, different agents share a common processing resource, and each agent wants to minimize a cost function depending on its own jobs only. These issues arise in different application contexts, including real-time systems, integrated service networks, industrial districts, and telecommunication systems. Motivated by its importance in practical applications, we consider two-agent scheduling on a single machine, where the objective is to minimize the total completion time of the jobs of the first agent subject to an upper bound on the total completion time of the jobs of the second agent. To solve the proposed problem, a branch-and-bound algorithm and three simulated annealing algorithms are developed. In addition, extensive computational experiments are conducted to test the performance of the algorithms.
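A toy Python version of the simulated-annealing side of such a scheme: a single machine, two agents, minimizing agent A's total completion time while agent B's total completion time is held below an upper bound via a penalty term. The instance data, penalty weight, and cooling schedule are invented for the demo and are not from the paper.

```python
import math
import random

def agent_costs(seq):
    """Total completion time per agent for a sequence of (agent, p_j) jobs."""
    t, tot = 0, {"A": 0, "B": 0}
    for agent, p in seq:
        t += p                 # completion time of this job
        tot[agent] += t
    return tot

def sa_two_agent(jobs, ub_b, steps=5000, t0=5.0, seed=2):
    """Swap-neighborhood SA; B's bound is enforced by a large penalty."""
    rng = random.Random(seed)

    def penalized(s):
        c = agent_costs(s)
        return c["A"] + 1000 * max(0, c["B"] - ub_b)

    seq = jobs[:]
    cost = penalized(seq)
    best, best_cost = seq[:], cost
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        i, j = rng.randrange(len(seq)), rng.randrange(len(seq))
        seq[i], seq[j] = seq[j], seq[i]
        new = penalized(seq)
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new
            if cost < best_cost:
                best, best_cost = seq[:], cost
        else:
            seq[i], seq[j] = seq[j], seq[i]   # undo the swap
    return best, agent_costs(best)

jobs = [("A", 2), ("A", 4), ("A", 1), ("B", 3), ("B", 2)]
schedule, costs = sa_two_agent(jobs, ub_b=20)
print(costs)  # A's total completion time, with B's kept within the bound
```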
Indian Academy of Sciences (India)
Satyajit Guha; Soumya Ganguly Neogi; Pinaki Chaudhury
2014-05-01
In this paper, we explore the use of a stochastic optimizer, namely simulated annealing (SA), followed by a density functional theory (DFT)-based strategy for evaluating the structure and infrared spectroscopy of (H2O)nOH⁻ clusters, where n = 1-6. We show that the use of SA can generate both global and local minimum structures of these cluster systems. We also perform DFT calculations, using the optimized coordinates obtained from SA as input, and extract the IR spectra of these systems. Finally, we compare our results with available theoretical and experimental data. There is a close correspondence between the computed frequencies from our theoretical study and the available experimental data. To further aid in understanding the details of the hydrogen bonds formed, we performed atoms-in-molecules calculations on all the global minimum structures to evaluate the relevant electron densities and critical points.
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution-Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
ArF-excimer-laser annealing of 3C-SiC films—diode characteristics and numerical simulation
Mizunami, T.; Toyama, N.
2003-09-01
We fabricated Schottky barrier diodes using 3C-SiC films deposited on Si(111) by lamp-assisted thermal chemical vapor deposition and annealed with an ArF excimer laser. Improvement in both the reverse current and the ideality factor was obtained with 1-3 pulses at energy densities of 1.4-1.6 J/cm² per pulse. We solved a heat equation numerically, assuming a transient liquid phase of SiC. The calculated threshold energy density for melting the surface was 0.9 J/cm². The thermal effects of the Si substrate on the SiC film are also discussed. The experimental optimum condition was consistent with the numerical simulation.
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
The characteristics of the design resources in ship collaborative design are described, and a hierarchical model for the evaluation of the design resources is established. A comprehensive evaluation of the co-designers of the collaborative design resources is carried out from different aspects using the Analytic Hierarchy Process (AHP), and the candidates are determined according to the evaluation results. Meanwhile, based on the principle of minimum cost, and starting from the relations between the design tasks and the corresponding co-designers, an optimizing selection model of the collaborators is established, and a novel genetic algorithm combined with simulated annealing is proposed to realize the optimization. It overcomes the defect of the genetic algorithm, which may lead to premature convergence and local optima if used individually. The application of this method in a ship collaborative design system proves its feasibility and provides a quantitative method for the optimizing selection of design resources.
Energy Technology Data Exchange (ETDEWEB)
Rincon, Luis [Universidad de Los Andes, Merida (Venezuela)
2001-03-01
A semiempirical simulated annealing molecular dynamics method using a fictitious Lagrangian has been developed for the study of the structural and electronic properties of micro- and nano-clusters. As an application of the present scheme, we study the structure of Naₙ clusters in the range n = 2-100 and compare the present calculations with some ab initio model calculations.
Adaptive LES Methodology for Turbulent Flow Simulations
Energy Technology Data Exchange (ETDEWEB)
Oleg V. Vasilyev
2008-06-12
Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003), and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of the DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic
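A quick back-of-the-envelope using the Re^(9/4) cost scaling quoted above: raising the Reynolds number by one decade multiplies the DNS cost by 10^(9/4), i.e. nearly two hundred times.

```python
# cost ratio for a tenfold increase in Reynolds number under Re^(9/4) scaling
ratio = 10.0 ** (9.0 / 4.0)
print(round(ratio))  # 178
```

This is why DNS remains limited to "unit" problems while LES-type approaches, which resolve only the energy-carrying scales, stay tractable.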
Adaptive Mesh Fluid Simulations on GPU
Wang, Peng; Kaehler, Ralf
2009-01-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA par...
Tournus, Florent; Tamion, Alexandre; Hillion, Arnaud; Dupuis, Véronique
2016-12-01
Isothermal remanent magnetization (IRM) and direct current demagnetization (DcD) measurements are powerful tools for qualitatively studying the interactions (through the Δm parameter) between magnetic particles in granular media. For magnetic nanoparticles diluted in a matrix, it is possible to reach a regime where Δm is equal to zero, i.e., where interparticle interactions are negligible: one can then infer the intrinsic properties of the nanoparticles through measurements on an assembly, analyzed by a combined fit procedure (based on the Stoner-Wohlfarth and Néel models). Here we illustrate the benefits of a quantitative analysis of IRM curves for Co nanoparticles embedded in amorphous carbon (before and after annealing): while a large anisotropy increase might have been deduced from the other measurements, IRM curves provide an improved characterization of the nanomagnets' intrinsic properties, revealing that this is in fact not the case. This shows that IRM curves, which only probe the irreversible switching of nanomagnets, are complementary to the widely used low-field susceptibility curves.
PASSATA: object oriented numerical simulation software for adaptive optics
Agapito, G.; Puglisi, A.; Esposito, S.
2016-07-01
We present the last version of the PyrAmid Simulator Software for Adaptive opTics Arcetri (PASSATA), an IDL and CUDA based object oriented software developed in the Adaptive Optics group of the Arcetri observatory for Monte-Carlo end-to-end adaptive optics simulations. The original aim of this software was to evaluate the performance of a single conjugate adaptive optics system for ground based telescope with a pyramid wavefront sensor. After some years of development, the current version of PASSATA is able to simulate several adaptive optics systems: single conjugate, multi conjugate and ground layer, with Shack Hartmann and Pyramid wavefront sensors. It can simulate from 8m to 40m class telescopes, with diffraction limited and resolved sources at finite or infinite distance from the pupil. The main advantages of this software are the versatility given by the object oriented approach and the speed given by the CUDA implementation of the most computational demanding routines. We describe the software with its last developments and present some examples of application.
Adaptive time steps in trajectory surface hopping simulations
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
Energy Technology Data Exchange (ETDEWEB)
Liu, Bin, E-mail: bins@ieee.org [School of Computer Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023 (China)
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
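The ESS-based approximation measure mentioned above has a standard form, ESS = (Σw)²/Σw²; a minimal sketch, computing it from log-weights for numerical stability (an implementation choice assumed here, not taken from the paper):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS of a set of importance-sampling draws.

    ESS = (sum w)^2 / sum w^2.  It equals N for uniform weights and
    approaches 1 when a single draw dominates, so a larger ESS indicates
    a proposal that better resembles the target posterior.
    """
    lw = np.asarray(log_weights, dtype=float)
    lw = lw - lw.max()          # stabilize before exponentiating
    w = np.exp(lw)
    return w.sum() ** 2 / (w * w).sum()
```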
Fuzzy adaptive hybrid annealed particle filter algorithm
Institute of Scientific and Technical Information of China (English)
蒋东明
2013-01-01
A new particle filter algorithm, based on the hybrid annealed particle filter (HAPF), is proposed for on-line state estimation of nonlinear, non-Gaussian systems and to address the inherent degeneracy problem of the particle filter. In the filtering algorithm, an adjustment factor is introduced according to the relation between the statistical properties of the system's state noise and measurement noise, and an annealing coefficient is then produced by a fuzzy inference system. State parameter separation and the annealing coefficient are used to construct the importance probability density function. The algorithm obtains a better annealing coefficient while retaining the advantages of HAPF. Simulation experiments show that the proposed filtering algorithm outperforms the HAPF.
Institute of Scientific and Technical Information of China (English)
温平川; 徐晓东; 何先刚
2003-01-01
This paper presents a highly hybrid genetic algorithm/simulated annealing (GA/SA) algorithm. The algorithm has been successfully implemented on a Beowulf PC cluster and applied to a set of standard function optimization problems. The experimental results show that the proposed algorithm is not only effective but also robust.
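As a rough illustration of how a GA can be hybridized with SA on standard function optimization problems, the sketch below accepts GA offspring via a Metropolis criterion under a geometrically cooled temperature. All operators and parameter values are assumptions for illustration, not taken from the paper:

```python
import math
import random

def hybrid_ga_sa(f, dim, pop_size=20, gens=200, t0=1.0, alpha=0.97, seed=1):
    """Minimize f over [-5, 5]^dim with a GA whose offspring are accepted
    by a simulated-annealing (Metropolis) criterion."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        for i in range(pop_size):
            a, b = rng.sample(range(pop_size), 2)
            # uniform crossover of two random parents + Gaussian mutation
            child = [rng.choice([pop[a][d], pop[b][d]]) + rng.gauss(0, 0.1)
                     for d in range(dim)]
            delta = f(child) - f(pop[i])
            # SA acceptance: always take improvements, sometimes worse moves
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pop[i] = child
        temp *= alpha  # geometric cooling once per generation
    return min(pop, key=f)
```

Early on, the high temperature lets poor offspring survive (exploration); as the temperature decays, the acceptance rule becomes nearly greedy (exploitation).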
Using Adaptive Simulated Annealing to Estimate Ocean Bottom Acoustic Properties from Acoustic Data
2007-11-02
[Unrecoverable figure residue: iteration plots of sediment sound speed (m/s), density (g/cc) and attenuation (dB/λ).]
Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates
Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.
2016-03-01
We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high-performance power semiconductor modules, using finite-element simulations and transient plane source measurements. For the simulations, several physical modules can be used, suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer we found non-isothermal simulations to be closest to reality for the temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and, based on these results, evaluated the thermal resistance of a typical power module using calculation-time-optimized steady-state simulations. Here we investigated the influence of the thermal contact conductance (TCC) between the metal matrix and the inserts on the heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplate we achieved local improvements in heat dissipation compared to the plain metal baseplate.
The relative entropy is fundamental to adaptive resolution simulations
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Adaptive resolution simulation of an atomistic protein in MARTINI water
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej
2014-01-01
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coa
The behavior of adaptive bone-remodeling simulation models
H.H. Weinans (Harrie); R. Huiskes (Rik); H.J. Grootenboer
1992-01-01
The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Energy Technology Data Exchange (ETDEWEB)
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspirations from topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
Zhang, Jiapu
2013-01-01
Simulated annealing (SA) was inspired by annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects, both attributes of the material that depend on its thermodynamic free energy. In this paper, we first study the practical implementation of SA in detail. Then, hybridizing pure SA with local (or global) search optimization methods allows us to design several effective and efficient global search optimization methods. In order to keep the original sense of SA, we clarify our understanding of SA in the crystallography and molecular modeling fields through studies of prion amyloid fibrils.
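The classic SA loop summarized above (Metropolis acceptance under a cooling temperature, by analogy with metallurgical annealing) fits in a few lines. The geometric cooling schedule and Gaussian neighborhood in the usage note are common textbook choices, not necessarily the ones studied in the paper:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, alpha=0.99, steps=5000, seed=0):
    """Minimal SA loop: propose a neighbor, accept it with the Metropolis
    probability exp(-delta/T), and geometrically cool the temperature."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # accept improvements always, uphill moves with probability exp(-d/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest
```

Usage, e.g. minimizing f(x) = (x − 3)² with a Gaussian neighborhood: `simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0, lambda x, rng: x + rng.gauss(0, 0.5))`.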
Bello, A.; Laredo, E.; Grimau, M.
1999-11-01
The existence of a distribution of relaxation times has been widely used to describe the relaxation function versus frequency in glass-forming liquids. Several empirical distributions have been proposed, and the usual method is to fit the experimental data to a model that assumes one of these functions. An alternative is to extract from the experimental data the discrete profile of the distribution function that best fits the experimental curve, without any a priori assumption. To test this approach, a Monte Carlo algorithm using simulated annealing is used to best fit simulated dielectric loss data, ε''(ω), generated with Cole-Cole, Cole-Davidson, Havriliak-Negami, and Kohlrausch-Williams-Watts (KWW) functions. The relaxation times distribution, G(ln(τ)), is obtained as a histogram that follows very closely the analytical expression for the distributions that are known in these cases. Also, the temporal decay functions, φ(t), are evaluated and compared to a stretched exponential. The method is then applied to experimental data for α-poly(vinylidene fluoride) over a temperature range starting at 233 K. The fragility index of poly(vinylidene fluoride) (PVDF) is found to be 87, which characterizes this polymer as a relatively structurally strong material.
Frühwirth, R; Vanlaer, Pascal
2007-01-01
Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described, using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.
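The annealed re-weighting described above is commonly implemented as a Fermi-function weight in the track's χ², with the temperature as the annealing parameter; the sketch below follows that standard form, which may differ in detail from the authors' exact formulation:

```python
import math

def annealed_weight(chi2, chi2_cut, temperature):
    """Fermi-function track weight used in adaptive vertex fitting:

        w = e^(-chi2/2T) / (e^(-chi2/2T) + e^(-chi2_cut/2T))
          = 1 / (1 + e^((chi2 - chi2_cut)/2T))   (numerically stable form)

    At high T every track gets a soft weight; as T -> 0 the weight tends
    to a hard cut at chi2 == chi2_cut.  Annealing (gradually lowering T)
    is what lets the iterative re-weighted Kalman filter avoid committing
    too early to an outlier rejection, i.e. avoid local minima.
    """
    return 1.0 / (1.0 + math.exp((chi2 - chi2_cut) / (2.0 * temperature)))
```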
Adaptive deployment of model reductions for tau-leaping simulation
Wu, Sheng; Fu, Jin; Petzold, Linda R.
2015-05-01
Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.
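A single explicit tau-leaping step, the primitive that the model reductions above accelerate, can be sketched as follows. Clipping negative populations to zero is a simplification assumed here; production codes typically reject or shrink such leaps:

```python
import numpy as np

def tau_leap_step(x, stoich, propensities, tau, rng):
    """One explicit tau-leaping step: fire each reaction channel a Poisson
    number of times with mean a_j(x)*tau and apply the stoichiometry.

    x            -- species populations, shape (n_species,)
    stoich       -- stoichiometry matrix, one column per reaction
    propensities -- callable returning a_j(x) for each reaction
    """
    a = propensities(x)
    k = rng.poisson(a * tau)              # firings per channel in [t, t+tau)
    return np.maximum(x + stoich @ k, 0)  # clip negatives (simplification)
```

Usage for a birth-death process (birth at rate 5, death at rate 0.5·x): `stoich = np.array([[1, -1]])` and `propensities = lambda s: np.array([5.0, 0.5 * s[0]])`.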
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Energy Technology Data Exchange (ETDEWEB)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
Adaptive image ray-tracing for astrophysical simulations
Parkin, E R
2010-01-01
A technique is presented for producing synthetic images from numerical simulations whereby the image resolution is adapted around prominent features. In so doing, adaptive image ray-tracing (AIR) improves the efficiency of a calculation by focusing computational effort where it is needed most. The results of test calculations show that a factor of ≳4 speed-up, and a commensurate reduction in the number of pixels required in the final image, can be achieved compared to an equivalent calculation with a fixed-resolution image.
Selection for autochthonous bifidobacterial isolates adapted to simulated gastrointestinal fluid
Directory of Open Access Journals (Sweden)
H Jamalifar
2010-03-01
Background and purpose of the study: Bifidobacterial strains are excessively sensitive to acidic conditions, which can affect their viability in the stomach and in fermented foods and, as a result, restrict their use as live probiotic cultures. The aim of the present study was to obtain bifidobacterial isolates with augmented tolerance to simulated gastrointestinal conditions using a cross-protection method. Methods: Individual bifidobacterial strains were treated in an acidic environment and in media containing bile salts and NaCl. Viability of the acid- and acid-bile-NaCl-tolerant isolates was further examined in simulated gastric and small-intestinal fluids by incubating the probiotic bacteria in the corresponding media for 120 min. Antipathogenic activities of the adapted isolates were compared with those of the original strains. Results and major conclusion: The acid- and acid-bile-NaCl-adapted isolates showed significantly improved viability (p<0.05) in simulated gastric fluid compared to their parent strains. The reduction in bacterial count (log cfu/ml) of the acid- and acid-bile-NaCl-adapted isolates in simulated gastric fluid ranged from 0.64-3.06 and 0.36-2.43 logarithmic units, respectively, after 120 min of incubation. There was no significant difference between the viability of the acid-bile-NaCl-tolerant isolates and the original strains under the simulated small-intestinal condition, except for Bifidobacterium adolescentis (p<0.05). The presence of 15 ml of supernatants of the acid-bile-NaCl-adapted isolates, and of the initial Bifidobacterium strains, inhibited pathogenic bacterial growth for 24 h. Probiotic bacteria with improved ability to survive the harsh gastrointestinal environment could thus be obtained by successive treatment of the strains in acid, bile salt and NaCl environments.
Directory of Open Access Journals (Sweden)
Felipe Baesler
2008-12-01
This paper introduces a variant of the metaheuristic simulated annealing, oriented to solving multiobjective optimization problems, called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). The technique incorporates short- and long-term memory concepts into simulated annealing in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study from the local sawmill industry. The results show that MOSARTS behaved much better than the other methods, finding better solutions in terms of dominance and frontier dispersion.
Energy Technology Data Exchange (ETDEWEB)
Setyawan, Wahyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Nandipati, Giridhar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Univ. of Washington, Seattle, WA (United States); Heinisch, Howard L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Kurtz, Richard J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×E_d, where E_d = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×E_d. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5 T_m) practically no large vacancy clusters are formed, while large SIA clusters appear in all simulations. The implication of such asymmetry for long-term defect survival and damage accumulation is discussed. In addition, <100>{110} SIA loops are observed to form directly in the highest-energy cascades, while <100> vacancy loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both vacancy and SIA loops with Burgers vector of <100> type is relatively rare.
Hu, Kan-Nian; Qiang, Wei; Tycko, Robert
2011-01-01
We describe a general computational approach to site-specific resonance assignments in multidimensional NMR studies of uniformly 15N,13C-labeled biopolymers, based on a simple Monte Carlo/simulated annealing (MCSA) algorithm contained in the program MCASSIGN2. Input to MCASSIGN2 includes lists of multidimensional signals in the NMR spectra with their possible residue-type assignments (which need not be unique), the biopolymer sequence, and a table that describes the connections that relate one signal list to another. As output, MCASSIGN2 produces a high-scoring sequential assignment of the multidimensional signals, using a score function that rewards good connections (i.e., agreement between relevant sets of chemical shifts in different signal lists) and penalizes bad connections, unassigned signals, and assignment gaps. Examination of a set of high-scoring assignments from a large number of independent runs allows one to determine whether a unique assignment exists for the entire sequence or parts thereof. We demonstrate the MCSA algorithm using two-dimensional (2D) and three-dimensional (3D) solid state NMR spectra of several model protein samples (α-spectrin SH3 domain and protein G/B1 microcrystals, HET-s218–289 fibrils), obtained with magic-angle spinning and standard polarization transfer techniques. The MCSA algorithm and MCASSIGN2 program can accommodate arbitrary combinations of NMR spectra with arbitrary dimensionality, and can therefore be applied in many areas of solid state and solution NMR. PMID:21710190
Indian Academy of Sciences (India)
KAMAL DEEP; PARDEEP K SINGH
2016-09-01
In this paper, an integrated mathematical model of multi-period cell formation and part operation tradeoff in a dynamic cellular manufacturing system is proposed, in consideration of multiple part process routes. The paper places emphasis on production flexibility (production/subcontracting of part operations) to satisfy the product demand requirement in different period segments of the planning horizon, considering production capacity shortage and/or sudden machine breakdown. The proposed model simultaneously generates machine cells and part families and selects the optimum process route, instead of the user specifying predetermined routes. Conventional optimization methods for the optimal cell formation problem require a substantial amount of time and memory space; hence a simulated annealing based genetic algorithm is proposed to explore the solution regions efficiently and to expedite the search of the solution space. To evaluate the computability of the proposed algorithm, different problem scenarios are adopted from the literature. The results confirm the effectiveness of the proposed approach in designing the manufacturing cells and minimizing the overall cost, considering various manufacturing aspects such as production volume, multiple process routes, production capacity, machine duplication, system reconfiguration, material handling and subcontracting of part operations.
Directory of Open Access Journals (Sweden)
M. Madić
2013-09-01
This paper presents a systematic methodology for empirical modeling and optimization of surface roughness in nitrogen and CO2 laser cutting of stainless steel. The surface roughness prediction model was developed in terms of laser power, cutting speed, assist gas pressure and focus position using an artificial neural network (ANN). To cover a wider range of laser cutting parameters and obtain an experimental database for the ANN model development, Taguchi's L27 orthogonal array was implemented in the experimental plan. The developed ANN model was expressed as an explicit nonlinear function, while the influence of the laser cutting parameters and their interactions on surface roughness was analyzed by generating 2D and 3D plots. The final goal of the experimental study focuses on the determination of the optimum laser cutting parameters for the minimization of surface roughness. Since the solution space of the developed ANN model is complex and the possibility of many local solutions is great, simulated annealing (SA) was selected as the method for the optimization of surface roughness.
Institute of Scientific and Technical Information of China (English)
De-xuan ZOU; Gai-ge WANG; Gai PAN; Hong-wei QI
2016-01-01
Outline-free floorplanning focuses on area and wirelength reductions, which are usually meaningless, since they can hardly satisfy modern design requirements. We concentrate on a more difficult and useful issue, fixed-outline floorplanning. This issue imposes fixed-outline constraints on the outline-free floorplanning, making the physical design more interesting and challenging. The contributions of this paper are primarily twofold. First, a modified simulated annealing (MSA) algorithm is proposed. In the beginning of the evolutionary process, a new attenuation formula is used to decrease the temperature slowly, to enhance MSA's global searching capacity. After a period of time, the traditional attenuation formula is employed to decrease the temperature rapidly, to maintain MSA's local searching capacity. Second, an excessive area model is designed to guide MSA to find feasible solutions readily. This can save much time for refining feasible solutions. Additionally, B*-tree representation is known as a very useful method for characterizing floorplanning; therefore, it is employed to perform the perturbing operation for MSA. Finally, six groups of benchmark instances with different dead spaces and aspect ratios (circuits n10, n30, n50, n100, n200, and n300) are chosen to demonstrate the efficiency of our proposed method on fixed-outline floorplanning. Compared to several existing methods, the proposed method is more efficient in obtaining desirable objective function values associated with the chip area, wirelength, and fixed-outline constraints.
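The two-stage cooling idea (slow attenuation early for global search, traditional rapid attenuation later for local refinement) can be sketched as a piecewise temperature schedule. The particular attenuation formulas and constants below are illustrative assumptions, not the paper's:

```python
def msa_temperature(k, t0=100.0, switch=200, alpha=0.95):
    """Two-stage cooling schedule in the spirit of the MSA described above.

    Before iteration `switch`, the temperature decays slowly (hyperbolic
    attenuation, favoring global exploration); afterwards, it decays by the
    traditional geometric rule (rapid attenuation, favoring local search).
    The schedule is continuous at the switch point.
    """
    if k < switch:
        return t0 / (1.0 + 0.01 * k)              # slow early decay
    t_switch = t0 / (1.0 + 0.01 * switch)         # value at the switch point
    return t_switch * alpha ** (k - switch)       # fast geometric decay
```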
Directory of Open Access Journals (Sweden)
Silvia Gaona
2015-01-01
Censuses in Mexico are taken by the National Institute of Statistics and Geography (INEGI). In this paper a Two-Phase Approach (TPA) to optimize the routes of INEGI's census takers is presented. For each pollster, in the first phase, a route is produced by means of the Simulated Annealing (SA) heuristic, which attempts to minimize the travel distance subject to particular constraints. Whenever the route is unrealizable, it is made realizable in the second phase by constructing a visibility graph for each obstacle and applying Dijkstra's algorithm to determine the shortest path in this graph. A tuning methodology based on the irace package was used to determine the parameter values for TPA on a subset of 150 instances provided by INEGI. The practical effectiveness of TPA was assessed on another subset of 1962 instances, comparing its performance with that of the in-use heuristic (INEGIH). The results show that TPA clearly outperforms INEGIH, with an average improvement of 47.11%.
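The second phase described above runs Dijkstra's algorithm on a visibility graph; a standard heap-based implementation over an adjacency-list graph (the visibility-graph construction itself is omitted here):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

On a visibility graph the nodes are route endpoints and obstacle vertices, and the edge weights are the Euclidean distances between mutually visible points.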
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Digital image analysis techniques are therefore a fast and low-cost methodology for predicting physical properties, knowing only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the simulated annealing relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains the porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its beginning, only the 2D results will be presented. (author)
Improved hybrid particle swarm algorithm based on simulated annealing
Institute of Scientific and Technical Information of China (English)
杨文光; 严哲; 隋丽丽
2015-01-01
In order to enhance the ability to solve the traveling salesman problem (TSP), the hybrid particle swarm optimization (PSO) algorithm with simulated annealing is improved by introducing an adaptive optimization strategy. The hybrid PSO algorithm with crossover and mutation easily falls into local optima, whereas an adaptive simulated annealing algorithm can escape local optima and search globally, so the combination of the two balances global and local search. The added adaptive strategy provides a condition for judging whether a particle is trapped in a local extremum, and on that basis performs adaptive optimization with a certain probability, enhancing the global search ability. Comparison with experimental results of the hybrid particle swarm algorithm shows the effectiveness of the proposed algorithm.
Disaster Rescue Simulation based on Complex Adaptive Theory
Directory of Open Access Journals (Sweden)
Feng Jiang
2013-05-01
Disaster rescue is one of the key measures of disaster reduction. The rescue process is complex, characterized by large scale, complicated structure and non-linearity, and is hard to describe and analyze with traditional methods. Based on complex adaptive theory, this paper analyzes the complex adaptability of the rescue process in terms of seven features: aggregation, nonlinearity, mobility, diversity, tagging, internal models and building blocks. With the support of the Repast platform, an agent-based model including rescue agents and victim agents is proposed. Moreover, two simulations with different parameters are employed to examine the feasibility of the model. The results show that the proposed model deals efficiently with disaster rescue simulation and can provide a reference for decision making.
A Model for Capturing Team Adaptation in Simulated Emergencies
DEFF Research Database (Denmark)
Paltved, Charlotte; Musaeus, Peter
2013-01-01
Research on how teams adapt to unforeseen changes or non-routine events supports the idea that updating is somehow difficult to accomplish [6,7]. Methods: Thirty emergency physicians and nurses participated in a Simulator Instructor Course at SkejSim Medical Simulation and Skills Training, Aarhus, Denmark...... changes, adjust priorities and implement adjusted strategies were more likely to perform successfully in environments with unforeseen changes; in other words, adaptability is the generalization of trained knowledge and skills to new, more difficult and more complex tasks. An interpretative approach...... is required to meaningfully account for communication exchanges in context. As such, this theoretical framework might provide a vocabulary for operationalizing the differences between "effective" and "ineffective" communication, moving beyond counting communication events or the frequency of certain...
Adaptive quantum computation in changing environments using projective simulation
Tiersch, M.; Ganahl, E. J.; Briegel, H. J.
2015-08-01
Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
Energy Technology Data Exchange (ETDEWEB)
Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States); University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4 (Canada); Robertson, Douglas D., E-mail: douglas.d.robertson@emory.edu [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States)
2013-07-01
Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design can withstand a lifetime of use. Finite element (FE) analyses have been used extensively to study the stresses and strains produced in implants and bone. However, these static analyses capture only a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen. High predicted bone density was greater than
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States)
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Moment Preserving Adaptive Particle Weighting Scheme for PIC Simulations
2012-07-01
[Abstract unavailable; the record contains only presentation-slide fragments, concerning Crank-Nicolson particle simulations, bi-Maxwellian test cases, and octree-based particle merge-and-split schemes that adapt particle count.]
Directory of Open Access Journals (Sweden)
Larry W. Burggraf
2013-07-01
To find low-energy SinCn structures out of hundreds to thousands of isomers, we have developed a general method to search for stable isomeric structures that combines stochastic potential surface search and pseudopotential plane-wave density functional theory Car-Parrinello molecular dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different region of the potential surface to find the regional minimum structure. By iterating this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to ten of the lowest-energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4-12) clusters and found the lowest-energy structures, most not previously reported. By analyzing the bonding patterns of the low-energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small and as a silicon network spanning the carbon segregation region when n is large.
Adaptive resolution simulation of an atomistic protein in MARTINI water
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej
2014-02-01
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
An adaptive nonlinear solution scheme for reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine- and coarse-grid simulations of several multiphase reservoirs from around the world.
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of numerical solutions to a wide range of problems. Local refinement of the computational mesh, where it is needed to improve the accuracy of solutions, yields better solution resolution and a more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three parts. First we explain the methodology and intricacies of the AFD scheme. Then we extend a cell-centered finite difference approximation to a multilevel hierarchy of refined grids, and finally we deploy the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced consumption of computing resources. (Author)
Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms
Energy Technology Data Exchange (ETDEWEB)
Bosl, W J
2005-01-26
The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, the evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding countermeasures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems that achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. It investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis
Adaptive Techniques for Clustered N-Body Cosmological Simulations
Menon, Harshitha; Zheng, Gengbin; Jetley, Pritish; Kale, Laxmikant; Quinn, Thomas; Governato, Fabio
2014-01-01
ChaNGa is an N-body cosmology simulation application implemented using Charm++. In this paper, we present the parallel design of ChaNGa and address many challenges arising due to the high dynamic ranges of clustered datasets. We focus on optimizations based on adaptive techniques for scaling to more than 128K cores. We demonstrate strong scaling on up to 512K cores of Blue Waters evolving 12 and 24 billion particles. We also show strong scaling of highly clustered datasets on up to 128K cores.
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
DEFF Research Database (Denmark)
Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.;
2016-01-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads......
Institute of Scientific and Technical Information of China (English)
LIANG WEN-XI; ZHANG JING-JUAN; L(U) JUN-FENG; LIAO RUI
2001-01-01
We have designed a spatially quantized diffractive optical element (DOE) for controlling the beam profile in three-dimensional space with the help of the simulated annealing (SA) algorithm. In this paper, we investigate the annealing schedule and the neighbourhood, the deterministic parameters of the process that govern the quality of the SA algorithm. The algorithm is employed to solve the discrete stochastic optimization problem of the design of a DOE. The objective function which constrains the optimization is also studied. The computed results demonstrate that the algorithm converges stably to an optimal solution close to the global optimum within an acceptable computing time. The results meet the design requirement well and are applicable.
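The two deterministic parameters the abstract highlights, the annealing schedule and the neighbourhood move, map directly onto the two pluggable pieces of a generic SA driver. The sketch below is a minimal illustration on a toy quantized (+/-1 phase) problem, not the authors' DOE objective function; the schedule parameters and the toy target are assumptions.

```python
import math
import random

def anneal(energy, state, neighbour, t0=2.0, alpha=0.98, steps=3000, seed=7):
    """Generic SA driver: quality depends on the schedule (t0, alpha)
    and on the neighbour() move, the two knobs studied in the paper."""
    rng = random.Random(seed)
    s, e = state, energy(state)
    best, best_e = s, e
    T = t0
    for _ in range(steps):
        cand = neighbour(s, rng)
        ce = energy(cand)
        # Metropolis rule: always accept improvements, accept worsening
        # moves with probability exp(-(ce - e)/T)
        if ce < e or rng.random() < math.exp((e - ce) / max(T, 1e-12)):
            s, e = cand, ce
            if e < best_e:
                best, best_e = s, e
        T *= alpha                    # geometric cooling schedule
    return best, best_e

def flip_one(s, rng):
    """Neighbourhood move: flip the phase of one randomly chosen element."""
    j = rng.randrange(len(s))
    return s[:j] + [-s[j]] + s[j + 1:]

# Toy discrete problem standing in for a quantized-phase design:
# choose +/-1 values whose alternating sum hits a target.
TARGET = 12
def energy(s):
    return (sum(v * (-1) ** i for i, v in enumerate(s)) - TARGET) ** 2
```

Swapping in a different `neighbour` (e.g. flipping a block of phases) or a slower schedule changes the quality-versus-time trade-off without touching the driver.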
Institute of Scientific and Technical Information of China (English)
吴坤鸿; 詹世贤
2016-01-01
According to the rules of fire strike, a multi-objective target assignment model is established, and a distributed genetic simulated annealing algorithm (DGSA) is proposed to solve it. DGSA improves on the classic genetic algorithm (GA) as follows: the single-objective serial search is changed to a multi-objective distributed search, which is better suited to multi-objective optimization; individual selection combines preservation of the best individual with roulette-wheel selection; the simulated annealing algorithm is incorporated into the crossover operator; and a self-adaptive mutation probability is applied, maintaining a good balance between the breadth (exploration) and depth (exploitation) of the search. Finally, the efficiency and reliability of DGSA are verified by simulation experiments.
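The selection scheme described, preservation of the best individual combined with roulette-wheel sampling, can be sketched as follows. The SA-augmented crossover and self-adaptive mutation are omitted, and strictly positive fitness values are assumed; this is a generic illustration, not the paper's implementation.

```python
import random

def select_next_generation(pop, fitness, rng):
    """Elitism plus roulette wheel: keep the best individual
    unconditionally, then fill the remaining slots by
    fitness-proportionate sampling (assumes fitness > 0)."""
    fits = [fitness(p) for p in pop]
    best = pop[max(range(len(pop)), key=lambda i: fits[i])]
    total = sum(fits)
    chosen = [best]                       # elite survives unchanged
    while len(chosen) < len(pop):
        r = rng.random() * total          # spin the wheel
        acc = 0.0
        for p, f in zip(pop, fits):
            acc += f
            if acc >= r:
                chosen.append(p)
                break
    return chosen
```

Elitism guarantees the best solution is never lost between generations, while roulette-wheel sampling keeps selection pressure proportional to fitness.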
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
Simulated annealing reveals the kinetic activity of SGLT1, a member of the LeuT structural family.
Longpré, Jean-Philippe; Sasseville, Louis J; Lapointe, Jean-Yves
2012-10-01
The Na(+)/glucose cotransporter (SGLT1) is the archetype of membrane proteins that use the electrochemical Na(+) gradient to drive uphill transport of a substrate. The crystal structure recently obtained for vSGLT strongly suggests that SGLT1 adopts the inverted repeat fold of the LeuT structural family for which several crystal structures are now available. What is largely missing is an accurate view of the rates at which SGLT1 transits between its different conformational states. In the present study, we used simulated annealing to analyze a large set of steady-state and pre-steady-state currents measured for human SGLT1 at different membrane potentials, and in the presence of different Na(+) and α-methyl-d-glucose (αMG) concentrations. The simplest kinetic model that could accurately reproduce the time course of the measured currents (down to the 2 ms time range) is a seven-state model (C(1) to C(7)) where the binding of the two Na(+) ions (C(4)→C(5)) is highly cooperative. In the forward direction (Na(+)/glucose influx), the model is characterized by two slow, electroneutral conformational changes (59 and 100 s(-1)) which represent reorientation of the free and of the fully loaded carrier between inside-facing and outside-facing conformations. From the inward-facing (C(1)) to the outward-facing Na-bound configuration (C(5)), 1.3 negative elementary charges are moved outward. Although extracellular glucose binding (C(5)→C(6)) is electroneutral, the next step (C(6)→C(7)) carries 0.7 positive charges inside the cell. Alignment of the seven-state model with a generalized model suggested by the structural data of the LeuT fold family suggests that electrogenic steps are associated with the movement of the so-called thin gates on each side of the substrate binding site. To our knowledge, this is the first model that can quantitatively describe the behavior of SGLT1 down to the 2 ms time domain. The model is highly symmetrical and in good agreement with the
Institute of Scientific and Technical Information of China (English)
王宏健; 王晶; 曲丽萍; 刘振业
2013-01-01
A FastSLAM algorithm based on particle-weight variance reduction is presented in order to address the loss of localization accuracy for an autonomous underwater vehicle (AUV) caused by particle degeneracy and by the sample impoverishment that results from resampling in standard FastSLAM. The variance of the particle weights is reduced by generating an adaptive exponential fading factor, inspired by the cooling function of simulated annealing, thereby increasing the effective particle number. This simulated-annealing variance reduction replaces the resampling step of standard FastSLAM in AUV navigation and localization. The kinematic model of the AUV, the feature model and the sensor measurement models are established, and features are extracted with the Hough transform. Simultaneous localization and mapping experiments using the variance-reduction FastSLAM were conducted on sea-trial data. The results indicate that the proposed method maintains the diversity of the particles while weakening degeneracy, and at the same time enhances the accuracy and stability of the AUV navigation and localization system.
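The core quantities involved, the effective particle number and an annealing-style fading of the weights, can be illustrated as below. The abstract does not give the paper's exact fading factor, so the cooling-driven exponent here is an assumed form for illustration only; the key property shown is that fading flattens the weight distribution and so raises the effective particle number.

```python
def effective_particle_number(weights):
    """N_eff = 1 / sum(w_i^2) for normalised weights; a small N_eff
    signals particle degeneracy."""
    total = sum(weights)
    w = [x / total for x in weights]
    return 1.0 / sum(x * x for x in w)

def fade_weights(weights, step, t0=1.0, alpha=0.9):
    """Flatten the weight distribution with an exponent driven by an
    SA-style cooling function T(k) = t0 * alpha**k (assumed form).
    Early on (high T) weights are strongly flattened; as T -> 0 the
    exponent -> 1 and the original weights are recovered."""
    T = t0 * alpha ** step
    lam = 1.0 / (1.0 + T)            # fading exponent in (0, 1]
    faded = [w ** lam for w in weights]
    total = sum(faded)
    return [w / total for w in faded]
```

Raising weights to a power below one reduces their variance, which is exactly what increases the effective particle number and postpones degeneracy.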
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin; Wang, Da
2016-03-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background, and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70 and 95%.
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin
2015-01-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply model reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background, and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70 and 95%.
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.
2016-09-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.
Directory of Open Access Journals (Sweden)
Kumar Deepak
2015-12-01
Groundwater contamination due to leakage of gasoline is one of several causes of pollution of the groundwater environment. In the past few years, in-situ bioremediation has attracted researchers because of its ability to remediate a contaminant at its site at low cost. This paper proposes the use of a new hybrid algorithm to optimize a multi-objective function whose first objective is the cost of remediation and whose second is the residual contaminant at the end of the remediation period. The hybrid algorithm combines the methods of differential evolution, genetic algorithms and simulated annealing. Support vector machines (SVM) were used as a virtual simulator for the biodegradation of contaminants in the groundwater flow. The results obtained from the hybrid algorithm were compared with differential evolution (DE), the non-dominated sorting genetic algorithm (NSGA-II) and simulated annealing (SA). The proposed hybrid algorithm was found to provide the best solution. Fuzzy logic was used to find the best compromise solution, and finally a pumping-rate strategy for groundwater remediation is presented for that solution. The results show that the cost incurred for the best compromise solution is intermediate between the highest and lowest costs incurred for the other non-dominated solutions.
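With two objectives (remediation cost and residual contaminant), comparing algorithms comes down to their non-dominated solutions. A minimal Pareto-front filter for minimisation problems, independent of the specific algorithms being compared, can be sketched as:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Return the non-dominated (Pareto) front of a list of
    objective vectors, e.g. (cost, residual_contaminant) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

A compromise-selection step (fuzzy logic in the paper) then picks a single operating point from this front.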
Salter, Bill Jean, Jr.
Purpose: The advent of new, so-called IVth generation, external beam radiation therapy treatment machines (e.g. Scanditronix's MM50 Racetrack Microtron) has raised the question of how the capabilities of these new machines might be exploited to produce extremely conformal dose distributions. Such machines can produce electron energies as high as 50 MeV and, owing to their scanned-beam delivery of electron treatments, can modulate intensity and even energy within a broad field. Materials and methods: Two patients with 'challenging' tumor geometries were selected from the patient archives of the Cancer Therapy and Research Center (CTRC) in San Antonio, Texas. The treatment scheme that was tested allowed for twelve energy- and intensity-modulated beams, equi-spaced about the patient; only intensity was modulated for the photon treatment. The elementary beams, incident from any of the twelve allowed directions, were assumed parallel, and the elementary electron beams were modeled by elementary beam data. The optimal arrangement of elementary beam energies and/or intensities was determined by Szu-Hartley fast simulated annealing optimization. Optimized treatment plans were determined for each patient using both the high-energy, intensity- and energy-modulated electron (HIEME) modality and the 6 MV photon modality. The 'quality' of rival plans was scored using three different, popular objective functions: root mean square (RMS), maximize dose subject to dose and volume limitations (MDVL, Morrill et al.), and probability of uncomplicated tumor control (PUTC). The scores of the two optimized treatments (i.e. HIEME and intensity-modulated photons) were compared to the score of the conventional plan with which the patient was actually treated. Results: The first patient presented a deeply located target volume partially surrounding the spinal cord; a healthy right kidney was immediately adjacent to the tumor volume, separated
Computational Simulation of Hypervelocity Penetration Using Adaptive SPH Method
Institute of Scientific and Technical Information of China (English)
QIANG Hongfu; MENG Lijun
2006-01-01
The normal hypervelocity impact of a thin Al plate by an Al sphere was numerically simulated using the adaptive smoothed particle hydrodynamics (ASPH) method. In this method, the isotropic smoothing algorithm of standard SPH is replaced with anisotropic smoothing involving ellipsoidal kernels whose axes evolve automatically to follow the mean particle spacing as it varies in time, space, and direction around each particle. Using ASPH, the anisotropic volume changes under strong shock conditions are captured more accurately and clearly. The sophisticated meshless and Lagrangian features inherent in the SPH method are kept for treating large deformations and large inhomogeneities and for tracing free surfaces in the extremely transient impact process. A two-dimensional ASPH program was coded in C++. The developed hydrocode is examined on example problems of hypervelocity impacts of solid materials. The results obtained from the numerical simulation are compared with available experimental ones, and good agreement is observed.
Glowworm swarm optimization algorithm merging simulated annealing strategy
Institute of Scientific and Technical Information of China (English)
曹秀爽
2014-01-01
Artificial glowworm swarm optimization is a recent research direction in the field of swarm intelligence. The algorithm has been applied successfully to complex function optimization, but it easily falls into local optima and converges slowly in the later stages of the search. Simulated annealing, by contrast, has excellent global search ability. Combining their advantages, an improved glowworm swarm optimization algorithm based on a simulated annealing strategy is proposed. The simulated annealing strategy is integrated into the global search process of the glowworm swarm algorithm, and a tempering strategy is integrated into the local search process of the hybrid algorithm to improve search precision, improving the overall global and local search performance of glowworm swarm optimization. Simulation results show that the hybrid algorithm significantly increases solution accuracy and convergence speed, and is a feasible and effective method.
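The Metropolis-style acceptance rule that such simulated-annealing hybrids embed can be sketched as follows (an illustrative Python sketch, not the authors' code; the energy function, neighborhood move, and geometric cooling rate are all assumed toy choices):

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def anneal(energy, neighbor, x0, t0=1.0, alpha=0.95, steps=2000):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-dE/T), and cool T
    geometrically (T <- alpha * T) after every step."""
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        de = energy(cand) - e
        if de < 0 or random.random() < math.exp(-de / t):
            x, e = cand, e + de       # accept the move
            if e < best_e:
                best, best_e = x, e   # remember the best state seen
        t *= alpha
    return best, best_e

# Toy usage: minimize f(x) = x^2 starting far from the optimum.
best, best_e = anneal(lambda x: x * x,
                      lambda x: x + random.uniform(-0.5, 0.5),
                      x0=5.0)
```

Here T controls how permissive the acceptance test is: at high T the search is exploratory, and as T decays it degenerates into greedy local search, which is the behavior such hybrids graft onto the swarm's update step.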
Simulating adaptive wood harvest in a changing climate
Yousefpour, Rasoul; Nabel, Julia; Pongratz, Julia
2016-04-01
The world's forests experience substantial carbon exchange fluxes between land and atmosphere. Large carbon sinks occur in response to changes in environmental conditions (such as climate change and increased atmospheric CO2 concentrations), removing about one quarter of current anthropogenic CO2 emissions. Large sinks also occur due to regrowth of forest on areas of agricultural abandonment or forest management. Forest management, on the other hand, also leads to substantial amounts of carbon being eventually released to the atmosphere. Both sinks and sources attributable to forests are therefore dependent on the intensity of management. Forest management in turn depends on the availability of resources, which is influenced by environmental conditions and the sustainability of the management systems applied. Estimating future carbon fluxes therefore requires accounting for the interaction of environmental conditions, forest growth, and management. However, this interaction is not fully captured by current modeling approaches: Earth system models depict in detail interactions between climate, the carbon cycle, and vegetation growth, but use prescribed information on management. Resource needs and land management, however, are simulated by Integrated Assessment Models that typically have only coarse representations of the influence of environmental changes on vegetation growth and are typically based on the demand for wood driven by regional population growth and energy needs. Here we present a study that provides the link between environmental conditions, forest growth and management. We extend the land component JSBACH of the Max Planck Institute's Earth system model (MPI-ESM) to simulate potential wood harvest in response to altered growth conditions and thus as adaptive to changing climate and CO2 conditions. We apply the altered model to estimate potential wood harvest for future climates (representative concentration pathways, RCPs) for the management scenario of
Institute of Scientific and Technical Information of China (English)
刘万辉; 田树军; 贾春强; 曹宇宁
2008-01-01
This paper establishes a mathematical model of multi-objective optimization with behavior constraints in solid space based on the problem of optimal design of hydraulic manifold blocks (HMB). Due to the limited local search ability of the genetic algorithm (GA) in solving massive combinatorial optimization problems, simulated annealing (SA) is combined with it, multi-parameter concatenated coding is adopted, and a memory function is added, forming a hybrid genetic-simulated annealing algorithm with a memory function. Examples show that the modified algorithm improves both the local search ability in the solution space and the solution quality.
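The structure of such a hybrid can be illustrated with a toy sketch (hypothetical Python, not the paper's HMB-specific implementation; the binary encoding, one-max objective, and all parameter values are assumptions): offspring produced by crossover and mutation replace their parents only if they pass a Metropolis test, and a "memory" retains the best solution ever seen.

```python
import math
import random

random.seed(1)  # fixed seed for a reproducible toy run

def ga_sa(fitness, length=16, pop_size=20, gens=60, t0=2.0, alpha=0.9):
    """Toy hybrid GA-SA (maximization): one-point crossover, bit-flip
    mutation, Metropolis acceptance of offspring, best-ever 'memory'."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    memory = max(pop, key=fitness)  # memory function: best individual so far
    t = t0
    for _ in range(gens):
        nxt = []
        for parent in pop:
            mate = random.choice(pop)
            cut = random.randrange(1, length)
            child = parent[:cut] + mate[cut:]      # one-point crossover
            i = random.randrange(length)
            child[i] ^= 1                          # bit-flip mutation
            d = fitness(parent) - fitness(child)   # > 0 if child is worse
            if d <= 0 or random.random() < math.exp(-d / t):
                nxt.append(child)                  # SA-style acceptance
            else:
                nxt.append(parent)
        pop = nxt
        memory = max(pop + [memory], key=fitness)  # memory never degrades
        t *= alpha
    return memory

# Toy objective: maximize the number of 1-bits (one-max).
best = ga_sa(sum)
```

The memory makes the search elitist without constraining the Metropolis dynamics: the population may temporarily accept worse offspring at high temperature, but the best solution found is never lost.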
Directory of Open Access Journals (Sweden)
Bjelić Mišo B.
2016-01-01
Simulation models of welding processes allow us to predict the influence of welding parameters on the temperature field during welding and, through the temperature field, their influence on weld geometry and microstructure. This article presents a numerical, finite-difference based model of heat transfer during welding of thin sheets. Unfortunately, the accuracy of the model depends on many parameters that cannot be prescribed accurately. To solve this problem, we used the simulated annealing optimization method in combination with the presented numerical model. In this way, we were able to determine uncertain values of the heat source parameters, arc efficiency, emissivity and enhanced conductivity. The calibration procedure was performed using thermocouple measurements of temperatures during welding of P355GH steel. The obtained results were used as input for a simulation run. The results of the simulation showed that the presented calibration procedure can significantly improve the reliability of the heat transfer model. [National CEEPUS Office of the Czech Republic (project CIII-HR-0108-07-1314) and the Ministry of Education and Science of the Republic of Serbia (project TR37020)]
Energy Technology Data Exchange (ETDEWEB)
Joseph, Joby; Muthukumaran, S. [National Institute of Technology, Tamil Nadu (India)
2016-01-15
Abundant improvements have occurred in materials handling, especially in metal joining. Pulsed current gas tungsten arc welding (PCGTAW) is one of the consequential fusion techniques. In this work, PCGTAW of AISI 4135 steel produced through powder metallurgy (P/M) has been executed, and the process parameters have been studied applying Taguchi's L9 orthogonal array. The results show that the peak current (Ip), gas flow rate (GFR), welding speed (WS) and base current (Ib) are the critical constraints that strongly determine the tensile strength (TS) as well as the percentage of elongation (% Elong) of the joint. The practical impact of applying the genetic algorithm (GA) and simulated annealing (SA) to the PCGTAW process has been validated by calculating the deviation between predicted and experimental welding process parameters.
Directory of Open Access Journals (Sweden)
Bailing Liu
2015-01-01
Facility location, inventory control, and vehicle route scheduling are three key issues to be settled in the design of a logistics system for e-commerce. Owing to the online shopping features of e-commerce, customer returns are much more frequent than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with returns that have no quality defects. To solve this NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA on optimal solution, computing time, and computing stability.
Directory of Open Access Journals (Sweden)
Banani Basu
2010-05-01
In this paper, we propose a technique based on two evolutionary algorithms, simulated annealing and particle swarm optimization, to design a linear array of half-wavelength-long parallel dipole antennas that will generate a pencil beam in the horizontal plane with minimum standing wave ratio (SWR) and fixed side lobe level (SLL). The dynamic range ratio of the current amplitude distribution is kept at a fixed value. Two different methods have been proposed with different inter-element spacing but with the same current amplitude distribution. The first uses a fixed geometry and optimizes the excitation distribution on it. In the second, further reduction of SWR is achieved by optimizing the inter-element spacing while keeping the amplitude distribution the same as before. The coupling effect between the elements is analyzed using the induced EMF method and minimized in terms of SWR. Numerical results obtained from SA are validated by comparison with results obtained using PSO.
Sali, A; Blundell, T L
1990-03-20
A protein is defined as an indexed string of elements at each level in the hierarchy of protein structure: sequence, secondary structure, super-secondary structure, etc. The elements, for example, residues or secondary structure segments such as helices or beta-strands, are associated with a series of properties and can be involved in a number of relationships with other elements. Element-by-element dissimilarity matrices are then computed and used in the alignment procedure based on the sequence alignment algorithm of Needleman & Wunsch, expanded by the simulated annealing technique to take into account relationships as well as properties. The utility of this method for exploring the variability of various aspects of protein structure and for comparing distantly related proteins is demonstrated by multiple alignment of serine proteinases, aspartic proteinase lobes and globins.
Directory of Open Access Journals (Sweden)
Yanhui Li
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns that have no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns that have no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
Energy Technology Data Exchange (ETDEWEB)
Estevez H, O.; Duque, J. [Universidad de La Habana, Instituto de Ciencia y Tecnologia de Materiales, 10400 La Habana (Cuba); Rodriguez H, J. [UNAM, Instituto de Investigaciones en Materiales, 04510 Mexico D. F. (Mexico); Yee M, H., E-mail: oestevezh@yahoo.com [Instituto Politecnico Nacional, Escuela Superior de Fisica y Matematicas, 07738 Mexico D. F. (Mexico)
2015-07-01
1-Furoyl-3,3-diphenylthiourea (FDFT) was synthesized and characterized by FTIR, ¹H and ¹³C NMR, and ab initio X-ray powder structure analysis. FDFT crystallizes in the monoclinic space group P2₁ with a = 12.691(1), b = 6.026(2), c = 11.861(1) Å, β = 117.95(2)° and V = 801.5(3) Å³. The crystal structure has been determined from laboratory X-ray powder diffraction data using a direct-space global optimization strategy (simulated annealing) followed by Rietveld refinement. The thiourea group makes a dihedral angle of 73.8(6)° with the furoyl group. In the crystal structure, molecules are linked by van der Waals interactions, forming one-dimensional chains along the a axis. (Author)
Web Mining Based on Hybrid Simulated Annealing Genetic Algorithm and HMM
Institute of Scientific and Technical Information of China (English)
邹腊梅; 龚向坚
2012-01-01
The training algorithm used for HMMs is a local search algorithm and is sensitive to its initial parameters. Training a typical hidden Markov model from random initial parameters often leads to a sub-optimal model, so mining Web information with such an HMM is ineffective. The genetic algorithm (GA) has excellent global search ability but converges slowly and is prone to premature convergence, while simulated annealing (SA) has excellent local search ability but tends to wander randomly and lacks global search ability. Combining the advantages of the two, this paper proposes a hybrid simulated annealing genetic algorithm (SGA), selects the best SGA parameters by experiment, and uses SGA combined with Baum-Welch to optimize the HMM's initial parameters in the course of Web mining, compensating for Baum-Welch's sensitivity to initialization. Experimental results show that the SGA significantly improves precision and recall.
Simulated annealing ant colony algorithm for QAP
Institute of Scientific and Technical Information of China (English)
朱经纬; 芮挺; 蒋新胜; 张金林
2011-01-01
A simulated annealing ant colony algorithm is presented to tackle the quadratic assignment problem (QAP). The simulated annealing mechanism is introduced into the ant colony algorithm by setting a temperature that changes with the iterations. After each tour-construction cycle, the solution set obtained by the colony is taken as the candidate set; an update set is generated by accepting solutions from the candidate set with a probability determined by the current temperature, and this update set is used to update the trail (pheromone) information matrix. In each update, the best solution found so far is used to reinforce the trail information, and the trail information matrix is reset when the algorithm stagnates. Computational experiments demonstrate that the algorithm has high stability and convergence speed.
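The temperature-controlled candidate-set filtering described above can be sketched as follows (an illustrative Python sketch, not the authors' code; the toy "solutions" and cost function are hypothetical, and real QAP solutions would be assignment permutations):

```python
import math
import random

random.seed(2)  # fixed seed for a reproducible toy run

def metropolis_filter(candidates, cost, t):
    """Build the pheromone-update set: a candidate always passes if it
    matches the best cost in the set; otherwise it passes with
    probability exp(-(cost - best_cost) / T), as in simulated annealing."""
    best_cost = min(cost(c) for c in candidates)
    update = []
    for c in candidates:
        d = cost(c) - best_cost
        if d <= 0 or random.random() < math.exp(-d / t):
            update.append(c)
    return update

# Toy example: 'solutions' are numbers and cost is the distance from 10.
cands = [4, 9, 10, 15]
cost = lambda x: abs(x - 10)
hot = metropolis_filter(cands, cost, t=100.0)  # high T: most candidates pass
cold = metropolis_filter(cands, cost, t=0.01)  # low T: only the best passes
```

Early in the run (high T) the update set stays diverse, which diversifies the pheromone trails; as T falls, the set collapses toward the best tours, mimicking SA's exploration-to-exploitation transition.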
Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.
2016-05-01
Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ¹⁶O, ²⁰⁸Pb, and ²³⁸U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm⁻³, proton fraction of Yp = 0.3, and temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
Directory of Open Access Journals (Sweden)
Marco A. C. Benvenga
2011-10-01
Kinetic simulation and drying process optimization of corn malt by simulated annealing (SA), for estimation of the temperature and time parameters that preserve maximum amylase activity in the obtained product, are presented here. Germinated corn seeds were dried at 54-76 °C in a convective dryer, with occasional measurement of moisture content and enzymatic activity. The experimental data obtained were submitted to modeling. Simulation and optimization of the drying process were carried out using the SA method, a randomized improvement algorithm analogous to the physical annealing process. Results showed that the seeds were best dried between 3 h and 5 h. Among the models used in this work, the kinetic model of water diffusion into corn seeds showed the best fit. Drying temperature and time showed a quadratic influence on the enzymatic activity. Optimization through SA found the best condition at 54 °C and between 5.6 h and 6.4 h of drying, yielding a specific activity in the corn malt of 5.26±0.06 SKB/mg at 15.69±0.10% remaining moisture.
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Liang, Faming
2010-04-08
In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
Adaptive Training Considerations for Use in Simulation-Based Systems
2010-09-01
Specifically, their definition suggests that adaptation of instructional content can occur as a result of user traits or characteristics, as well as...typically adapted on task and state variables (e.g., anxiety or fatigue) rather than trait variables. It is critically important that formal...partial and non-AT (Tennyson & Rothen, 1977). Trainees also showed an increase in motor skills with AT (Cote, Williges, & Williges, 1981; Johnson
Balanced adaptive simulation of pollutant transport in Bay of Tangier
2014-01-01
A balanced adaptive scheme is proposed for the numerical solution of the coupled non-linear shallow water equations and depth-averaged advection-diffusion pollutant transport equation. The scheme uses the Roe approximate Riemann solver with centred discretization for advection terms and the Vazquez scheme for source terms. It is designed to handle non-uniform bed topography on triangular unstructured meshes, while satisfying the conservation property. Dynamic mesh adaptation criteria are base...
Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.
The paper presents an application concerning hydrological modelling for the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a LabVIEW programming environment, was applied with the aim of modelling the hydrological processes on this catchment. Previous research carried out in this region outlined the importance of environmental tracers in studying the hydrological behaviour, and important knowledge has been accumulated during this period concerning the mechanisms responsible for runoff generation. In conformity with the theoretical constraints, Topmodel was applied to a Haute-Mentue sub-catchment where tracing experiments consistently showed low contributions of soil water during flood events. The model was applied for two humid periods in 1998. First, the model calibration was done in order to provide the best estimates of total runoff. However, the simulated components (groundwater and rapid flow) deviated far from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by the environmental tracing. The calibration of the model was done using simulated annealing (SA) techniques, which are easy to implement and statistically allow convergence to a global minimum. The only problem is that the method is time- and computation-consuming. To improve this, a version of SA was used which is known as very fast simulated annealing (VFSA). The principles are the same as for the SA technique: the random search is guided by a certain probability distribution and the acceptance criterion is the same as for SA, but VFSA better takes into account the range of variation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space. The VFSA algorithm allows differentiated search in relation with the
Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej
2016-01-01
The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to
Stauffer, D.; Arndt, H.
Can unicellular organisms survive a drastic temperature change, and adapt to it after many generations? In simulations of the Penna model of biological aging, both extinction and adaptation were found for asexual and sexual reproduction as well as for parasex. These model investigations are the basis for the design of evolution experiments with heterotrophic flagellates.
Developing adaptive user interfaces using a game-based simulation environment
Brake, G.M. te; Greef, T.E. de; Lindenberg, J.; Rypkema, J.A.; Smets-Noor, N.J.J.M.
2006-01-01
In dynamic settings, user interfaces can provide more optimal support if they adapt to the context of use. Providing adaptive user interfaces to first responders may therefore be fruitful. A cognitive engineering method that incorporates development iterations in both a simulated and a real-world en
Parallel Mesh Adaptive Techniques for Complex Flow Simulation: Geometry Conservation
Directory of Open Access Journals (Sweden)
Angelo Casagrande
2012-01-01
Dynamic mesh adaptation on unstructured grids, by localised refinement and derefinement, is a very efficient tool for enhancing solution accuracy and optimising computational time. One of the major drawbacks, however, resides in the projection of the new nodes created during the refinement process onto the boundary surfaces. This can be addressed by the introduction of a library capable of handling geometric properties given by a CAD (computer-aided design) description. This is also of particular interest for enhancing the adaptation module when the mesh is smoothed, and hence moved, and must then be reprojected onto the surface of the exact geometry.
Computer simulation program is adaptable to industrial processes
Schultz, F. E.
1966-01-01
The reaction kinetics ablation program (REKAP), developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.
The adaptation method in the Monte Carlo simulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)
2015-06-15
The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
Energy Technology Data Exchange (ETDEWEB)
Fonville, Judith M., E-mail: j.fonville07@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Bylesjoe, Max, E-mail: max.bylesjo@almacgroup.com [Almac Diagnostics, 19 Seagoe Industrial Estate, Craigavon BT63 5QD (United Kingdom); Coen, Muireann, E-mail: m.coen@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Nicholson, Jeremy K., E-mail: j.nicholson@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Holmes, Elaine, E-mail: elaine.holmes@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Lindon, John C., E-mail: j.lindon@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Rantalainen, Mattias, E-mail: rantalai@stats.ox.ac.uk [Department of Statistics, Oxford University, 1 South Parks Road, Oxford OX1 3TG (United Kingdom)
2011-10-31
Highlights: Non-linear modeling of metabonomic data using K-OPLS; automated optimization of the kernel parameter by simulated annealing; K-OPLS provides improved prediction performance for exemplar spectral data sets; software implementation available for R and Matlab under the GPL v2 license. - Abstract: Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton nuclear magnetic resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and
Photovoltaic Power Prediction Based on Scene Simulation Knowledge Mining and Adaptive Neural Network
Directory of Open Access Journals (Sweden)
Dongxiao Niu
2013-01-01
Influenced by light, temperature, atmospheric pressure, and other random factors, photovoltaic power is volatile and intermittent. Accurately forecasting photovoltaic power can effectively improve the security and stability of the power grid system. The paper comprehensively analyzes the influence of light intensity, day type, temperature, and season on photovoltaic power. Using the proposed scene simulation knowledge mining (SSKM) technique, the influencing factors are clustered and fused into the prediction model. Combining an adaptive algorithm with a neural network, an adaptive neural network prediction model is established. An actual numerical example verifies the effectiveness and applicability of the proposed photovoltaic power prediction model based on scene simulation knowledge mining and an adaptive neural network.
Enzo+Moray: Radiation Hydrodynamics Adaptive Mesh Refinement Simulations with Adaptive Ray Tracing
Wise, John H
2010-01-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray tracing scheme, and its parallel implementation into the adaptive mesh refinement (AMR) cosmological hydrodynamics code, Enzo. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilised to study a broad range of astrophysical problems, such as stellar and black hole (BH) feedback. Inaccuracies can arise from large timesteps and poor sampling, therefore we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. (2006, 2009). We further test our method with more dynamical situations, for example, the propagation of an ionisation front through a Rayleigh-Taylor instability, time-varying luminosities, and collimated radiation. The test suite also includes an...
Institute of Scientific and Technical Information of China (English)
赵敬和; 谢玲
2011-01-01
The traveling salesman problem (TSP) is an NP-complete problem that is easy to describe but hard to solve: the number of possible routes grows exponentially with the number of cities. This paper uses, for the first time, a LabVIEW implementation of the simulated annealing algorithm to solve the problem. Simulation results show that LabVIEW's unique array operations can effectively implement simulated annealing for the TSP. Compared with other methods, this approach is simpler, more practical, more accurate and faster, and it is suitable for TSP instances with any number of cities.
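The record above implements the algorithm in LabVIEW; as a language-neutral illustration, a minimal simulated annealing TSP solver with 2-opt neighbour moves might look as follows (city coordinates, schedule and all parameters are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_tsp(cities, t0=10.0, alpha=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    cost = tour_length(cities, tour)
    best, best_cost = tour[:], cost
    t = t0
    for _ in range(steps):
        # 2-opt neighbour: reverse a random segment of the tour
        i, j = sorted(rng.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        c = tour_length(cities, cand)
        # Metropolis acceptance with geometric cooling
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            tour, cost = cand, c
            if cost < best_cost:
                best, best_cost = tour[:], cost
        t *= alpha
    return best, best_cost
```

For cities placed on a circle, the solver recovers the hull-ordered (shortest) tour.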
Fuzzy Backstepping Torque Control Of Passive Torque Simulator With Algebraic Parameters Adaptation
Ullah, Nasim; Wang, Shaoping; Wang, Xingjian
2015-07-01
This work presents fuzzy backstepping control techniques applied to a load simulator to obtain good tracking performance in the presence of extra torque and nonlinear friction effects. Assuming that the parameters of the system are uncertain and bounded, an algebraic parameter adaptation algorithm is used to adapt the unknown parameters. The effect of the transient fuzzy estimation error on the parameter adaptation algorithm is analyzed, and the fuzzy estimation error is further compensated using a saturation-function-based adaptive control law working in parallel with the actual system to improve the transient performance of the closed-loop system. The saturation-function-based adaptive control term is large in the transient phase and settles to an optimal lower value in the steady state, for which the closed-loop system remains stable. The simulation results verify the validity of the proposed control method applied to a complex aerodynamic passive load simulator.
Wavelet-Based Adaptive Solvers on Multi-core Architectures for the Simulation of Complex Systems
Rossinelli, Diego; Bergdorf, Michael; Hejazialhosseini, Babak; Koumoutsakos, Petros
We build wavelet-based adaptive numerical methods for the simulation of advection dominated flows that develop multiple spatial scales, with an emphasis on fluid mechanics problems. Wavelet based adaptivity is inherently sequential and in this work we demonstrate that these numerical methods can be implemented in software that is capable of harnessing the capabilities of multi-core architectures while maintaining their computational efficiency. Recent designs in frameworks for multi-core software development allow us to rethink parallelism as task-based, where parallel tasks are specified and automatically mapped into physical threads. This way of exposing parallelism enables the parallelization of algorithms that were considered inherently sequential, such as wavelet-based adaptive simulations. In this paper we present a framework that combines wavelet-based adaptivity with the task-based parallelism. We demonstrate good scaling performance obtained by simulating diverse physical systems on different multi-core and SMP architectures using up to 16 cores.
Simulating Computer Adaptive Testing With the Mood and Anxiety Symptom Questionnaire
G. Flens; N. Smits; I. Carlier; A.M. van Hemert; E. de Beurs
2015-01-01
In a post hoc simulation study (N = 3,597 psychiatric outpatients), we investigated whether the efficiency of the 90-item Mood and Anxiety Symptom Questionnaire (MASQ) could be improved for assessing clinical subjects with computerized adaptive testing (CAT). A CAT simulation was performed on each o
Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.
2016-07-01
The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide-field correction and large sky coverage. Their performance will be improved by using post-processing techniques, such as point spread function (PSF) deconvolution. The PSF estimation involves characterization of the different error sources in the AO system. Such error contributors are difficult to estimate, and simulation tools are a good way to do so. Within COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, we have developed an estimation tool that provides a comprehensive error budget from the outputs of a single simulation run.
Directory of Open Access Journals (Sweden)
ASHWIN MISHRA,
2011-01-01
Full Text Available In this study, a singularity analysis of the six-degree-of-freedom (DOF) Stewart platform in a specified design configuration has been carried out using various heuristic methods. The Jacobian matrix of the Stewart platform is obtained, the absolute value of its determinant is taken as the objective function, and the least value of this objective function is searched for in the reachable workspace of the platform to find the singular configurations. A configuration is singular if this objective function is zero. The results obtained by the different methods, namely the genetic algorithm, particle swarm optimization and its variants, and simulated annealing, are compared with each other. The variable sets considered are the desirable platform motions in the form of translation and rotation in six degrees of freedom. This paper hence presents a comparative study of these algorithms based on the results obtained and highlights the advantages of each in terms of computational cost and accuracy.
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, where this route can be a combination of public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. Firstly, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is its use of fuzzy numbers, which lets users express their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination; a well-known difficulty for meta-heuristic algorithms, trapping in local minima, is addressed by the SA acceptance scheme. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes, and the fare, the time, the user's inconvenience, and the length of the path were considered as the criteria. The proposed model was implemented for an area in the centre of Tehran, with a graphical user interface written in MATLAB. The results show the high efficiency and speed of the proposed algorithm, which supports our analyses.
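The AHP weighting step referenced above can be illustrated in its crisp form (the fuzzy variant replaces the matrix entries with fuzzy numbers and is not shown); this sketch computes criterion weights as the normalized principal eigenvector of a reciprocal pairwise-comparison matrix via power iteration. The matrix values are illustrative:

```python
def ahp_weights(pairwise, iters=100):
    """Criterion weights from a reciprocal pairwise-comparison matrix.

    pairwise[i][j] ~ how much more important criterion i is than j
    (Saaty's 1-9 scale); weights are the normalized principal eigenvector,
    approximated here by repeated power-iteration steps.
    """
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        # one power-iteration step: v = A w, then normalize to sum to 1
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w
```

For a perfectly consistent matrix such as [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], the weights are proportional to 4:2:1.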
Institute of Scientific and Technical Information of China (English)
齐小刚; 王云鹤
2011-01-01
To solve the parameter-setting problem in applications of the Hopfield neural network, this paper analyzes, on the basis of the network's working principle, the selection of model parameters when solving the traveling salesman problem (TSP). An evaluation function for the network is established by normalizing the output data, and a simulated annealing algorithm is then introduced to select the optimal parameters. Experimental results show that, with optimized parameters, the Hopfield neural network model obtains the optimal solution of the TSP more effectively and more quickly.
Directory of Open Access Journals (Sweden)
Yu Lin
2015-01-01
Full Text Available In recent years, logistics systems with multiple suppliers and plants in neighboring regions have been flourishing worldwide. However, high logistics costs remain a problem for such systems due to a lack of information sharing and cooperation. This paper proposes an extended mathematical model that minimizes transportation and pipeline inventory costs via the many-to-many milk-run routing mode. Because the problem is NP-hard, a two-stage heuristic algorithm is developed by comprehensively considering its characteristics. More specifically, an initial satisfactory solution is generated in the first stage through a greedy heuristic algorithm to minimize the total number of vehicle service nodes and a best-insertion heuristic algorithm to determine each vehicle's route. Then, a simulated annealing (SA) algorithm with limited search scope is used to improve the initial satisfactory solution. Thirty numerical examples are employed to test the proposed algorithms. The experimental results demonstrate the effectiveness of this algorithm. Further, the superiority of the many-to-many transportation mode over other modes is demonstrated via two case studies.
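The best-insertion step of the first-stage heuristic can be sketched as a generic cheapest-insertion routine (a common textbook form; the paper's exact variant is not specified, so the depot/stop representation and distance function below are illustrative assumptions):

```python
def best_insertion_route(depot, stops, dist):
    """Build a closed route by repeatedly inserting the stop whose cheapest
    insertion position increases the route length the least."""
    route = [depot, depot]            # closed route: leave and return to depot
    remaining = set(stops)
    while remaining:
        best = None                    # (extra_cost, stop, position)
        for s in remaining:
            for k in range(1, len(route)):
                a, b = route[k - 1], route[k]
                # detour cost of inserting s between consecutive nodes a and b
                extra = dist(a, s) + dist(s, b) - dist(a, b)
                if best is None or extra < best[0]:
                    best = (extra, s, k)
        _, s, k = best
        route.insert(k, s)
        remaining.remove(s)
    return route
```

With Euclidean distances and collinear stops, the routine produces an out-and-back route of minimal length.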
Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C
2015-01-01
The effects of the nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations from the Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, with different time periods to result in different degrees of α-Ti nano-phases in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion resistant α-Ti phases exhibited more promising bio-corrosion resistance, due to the superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses with the reactive Zr2Cu phases inducing serious galvanic corrosion and lower bio-corrosion resistance. Thus, whether the fully amorphous or partially crystallized metallic glass would exhibit better bio-corrosion resistance, the answer would depend on the crystallized phase nature.
Optimal Nationwide Traveling Scheme Based on Simulated Annealing Algorithm
Institute of Scientific and Technical Information of China (English)
吕鹏举; 原杰; 吕菁华
2011-01-01
An optimal itinerary through all provincial capitals, the municipalities, Hong Kong, Macao and Taipei is designed. The practical problems of finding the shortest path and lowest cost for travelling to these places are analyzed. Taking into account the relationships among cost, route, duration and means of transportation, a model is established with the objectives of minimizing path length, cost and time. The simulated annealing algorithm is adopted to solve the model, yielding a travel route that saves both money and time. The results demonstrate the correctness and practical value of this travel scheme.
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
Directory of Open Access Journals (Sweden)
Ufa Ruslan A.
2015-01-01
Full Text Available The motivation for the presented research is the need for new methods and tools for adequate simulation of intelligent electric power systems (IES) with active-adaptive electric networks, including Flexible Alternating Current Transmission System (FACTS) devices. The key requirements for the simulation are formulated. The presented analysis of IES simulation results confirms the need for a hybrid modelling approach.
Computational Multiqubit Tunnelling in Programmable Quantum Annealers
2016-08-25
classical simulated annealing that aims to take advantage of quantum tunnelling. In classical cooling optimization algorithms such as simulated annealing... to have established a quantum speedup. To this end, one would have to demonstrate that no known classical algorithm finds the optimal solution as fast... classical algorithms such as Quantum Monte Carlo or by employing cluster update methods. However, the collective tunnelling phenomena demonstrated here
Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation
Van Esch, J.M.
2010-01-01
Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses
ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing
Wise, John H.; Abel, Tom
2011-07-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
WISE An Adaptative Simulation of the LHC Optics
Hagen, P; Koutchouk, Jean-Pierre; Risselada, Thys; Sanfilippo, S; Todesco, E; Wildner, E
2006-01-01
The beam dynamics in the LHC requires tight control of the field quality and geometry of the magnets. As production advances, decisions have to be made on the acceptance of possible imperfections. To ease decision making, an adaptive model of the LHC optics has been built, based on the information available on the day (e.g. magnetic measurements at warm or cold, magnet allocation to machine slots) as well as on statistical evaluations for the missing information (e.g. magnets yet to be built or measured, or non-allocated slots). The uncertainties are included: relative and absolute measurement errors, warm-to-cold correlations for the fraction of magnets not measured at cold, hysteresis and power supply accuracy. The pre-processor WISE generates instances of the LHC field errors for the MAD-X program, with the possibility of selecting various sources. We present an application to estimate the expected beta-beating.
Block-Structured Adaptive Mesh Refinement Algorithms for Vlasov Simulation
Hittinger, J A F
2012-01-01
Direct discretizations of continuum kinetic equations, like the Vlasov equation, are under-utilized because the distribution function generally exists in a high-dimensional (>3D) space and the computational cost increases geometrically with dimension. We propose to use high-order finite-volume techniques with block-structured adaptive mesh refinement (AMR) to reduce the computational cost. The primary complication comes from a solution state comprised of variables of different dimensions. We develop the algorithms required to extend standard single-dimension block-structured AMR to the multi-dimension case. Specifically, algorithms for the reduction and injection operations that transfer data between mesh hierarchies of different dimensions are explained in detail. In addition, modifications to the basic AMR algorithm that enable the use of high-order spatial and temporal discretizations are discussed. Preliminary results for a standard 1D+1V Vlasov-Poisson test problem are presented. Results indicate that there is po...
Simulation Data Management for Adaptive Design Of Experiment
BLONDET, Gaëtan; BOUDAOUD, Nassim; LE DUIGOU, Julien
2015-01-01
Recent evolutions of computer-aided product development and the massive integration of numerical simulations into the design process require new methodologies to decrease computational costs. Numerical design of experiments is used to increase product quality by taking uncertainties in product development into account. However, this method can be time-consuming and involves a high computational cost. This paper presents a literature review of design of experiments method...
The numerical simulation tool for the MAORY multiconjugate adaptive optics system
Arcidiacono, Carmelo; Bregoli, Giovanni; Diolaiti, Emiliano; Foppiani, Italo; Agapito, Guido; Puglisi, Alfio; Xompero, Marco; Oberti, Sylvain; Cosentino, Giuseppe; Lombini, Matteo; Butler, Chris R; Ciliegi, Paolo; Cortecchia, Fausto; Patti, Mauro; Esposito, Simone; Feautrier, Philippe
2016-01-01
The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will correct for the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement multiconjugate adaptive optics, combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture and describe the modeled instrument components and the control strategies implemented in the code.
The numerical simulation tool for the MAORY multiconjugate adaptive optics system
Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.
2016-07-01
The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will correct for the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement multiconjugate adaptive optics, combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture and describe the modeled instrument components and the control strategies implemented in the code.
Institute of Scientific and Technical Information of China (English)
许闻清; 陈剑
2011-01-01
Considering the respective advantages and disadvantages of the genetic algorithm and the simulated annealing algorithm, the two are combined. The probabilities of crossover and mutation are dynamically adjusted to prevent premature convergence of the genetic algorithm, forming an improved genetic simulated annealing algorithm, which is applied to the optimization of a powertrain mounting system.
(YIP 2011) Unsteady Output-based Adaptive Simulation of Separated and Transitional Flows
2015-03-19
refined spaces. Using this anisotropy measure, we investigated several adaptive schemes, including time slab bisection, time node redistribution, static... dimensional wing undergoing prescribed flapping motion [14]. In both cases, the output is a lift coefficient at/near the final time of the simulation. 3.5... Y. Luo. Output-based space-time mesh adaptation for the compressible Navier-Stokes equations. Journal of Computational Physics, 230:5753-5773, 2011
Cluster Optimization and Parallelization of Simulations with Dynamically Adaptive Grids
Schreiber, Martin
2013-01-01
The present paper studies solvers for partial differential equations that work on dynamically adaptive grids stemming from spacetrees. Due to the underlying tree formalism, such grids can efficiently be decomposed on the fly into connected grid regions (clusters). A graph on those clusters, classified according to their grid invariancy, workload, multi-core affinity, and further metadata, represents the inter-cluster communication. While stationary clusters can already be handled more efficiently than their dynamic counterparts, we propose to treat them as atomic grid entities and introduce a skip mechanism that allows the grid traversal to omit those regions completely. The communication graph ensures that the cluster data are nevertheless kept consistent, and several shared-memory parallelization strategies are feasible. A hyperbolic benchmark that has to remesh selected mesh regions iteratively to preserve conforming tessellations acts as the benchmark for the present work. We discuss runtime improvements resulting from the skip mechanism and the implications for shared-memory performance and load balancing. © 2013 Springer-Verlag.
Role-play simulations for climate change adaptation education and engagement
Rumore, Danya; Schenk, Todd; Susskind, Lawrence
2016-08-01
In order to effectively adapt to climate change, public officials and other stakeholders need to rapidly enhance their understanding of local risks and their ability to collaboratively and adaptively respond to them. We argue that science-based role-play simulation exercises -- a type of 'serious game' involving face-to-face mock decision-making -- have considerable potential as education and engagement tools for enhancing readiness to adapt. Prior research suggests role-play simulations and other serious games can foster public learning and encourage collective action in public policy-making contexts. However, the effectiveness of such exercises in the context of climate change adaptation education and engagement has heretofore been underexplored. We share results from two research projects that demonstrate the effectiveness of role-play simulations in cultivating climate change adaptation literacy, enhancing collaborative capacity and facilitating social learning. Based on our findings, we suggest such exercises should be more widely embraced as part of adaptation professionals' education and engagement toolkits.
Optimal Control Problem of Feeding Adaptations of Daphnia and Neural Network Simulation
Kmeť, Tibor; Kmeťová, Mária
2010-09-01
A neural-network-based optimal control synthesis is presented for solving optimal control problems with control and state constraints and open final time. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network [9] and a recurrent neural network for solving nonlinear projection equations [10]. The proposed simulation method is illustrated by the optimal control problem of the feeding adaptation of filter feeders of Daphnia. Results show that the adaptive-critic-based systematic approach and neural network solving of nonlinear equations hold promise for obtaining the optimal control with control and state constraints and open final time.
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
Biswas, A.; Sharma, S. P.
2012-12-01
The self-potential (SP) method is an important geophysical technique that measures the electrical potential due to natural current sources in the Earth's subsurface. An inclined sheet is a familiar structure associated with mineralization, fault planes, groundwater flow and many other geological features that exhibit SP anomalies. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures. The forward response over a two-dimensional dipping sheet can be computed in three different ways, each using five variables, and the complexity of inversion differs among these three formulations. In the present study, an interpretation of SP anomalies using very fast simulated annealing (VFSA) global optimization has been developed, which yields new insight into the uncertainty and equivalence in the model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, dip and quality. A comparative assessment of the three forward approaches is performed to establish the efficacy of each in resolving possible ambiguity. Even though each forward formulation yields the same forward response, optimizing different sets of variables with different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and one approach is found to be the most suitable for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the
Supply-chain management based on simulated annealing algorithm%基于模拟退火算法的供应链管理分析
Institute of Scientific and Technical Information of China (English)
董雪
2012-01-01
With economic globalization, more and more enterprises focus on their core competitiveness, and logistics operations are gradually being separated from production and processing. How to effectively manage the relationship between suppliers and producers (supply-chain management) has therefore become a focus of enterprise competition and profit. Previous solutions of supply-chain models have often been based on the genetic algorithm, which is mature and effective but has poor local search ability and long computation times. This paper applies the simulated annealing algorithm to the solution of a supply-chain management model and demonstrates its effectiveness with an example.
Directory of Open Access Journals (Sweden)
Maikel Méndez-Morales
2014-09-01
Full Text Available This article presents the application of the simulated annealing (SA) algorithm to the optimal design of a water distribution system (WDS). SA is a metaheuristic search algorithm based on an analogy between the annealing process in metals (the controlled cooling of a body) and the solution of combinatorial optimization problems. The SA algorithm, together with various mathematical models, has been used successfully in the optimal design of WDSs. The full-scale WDS of the community of Marsella in San Carlos, Costa Rica, was used as a case study. The SA algorithm was implemented in the well-known EPANET model through the WaterNetGen extension. Three automated variants of the SA algorithm were compared with the manual trial-and-error design of the Marsella WDS, using only unit pipe costs. The results show that all three automated SA schemes achieved unit costs below a fraction of 0.49 of the original cost of the trial-and-error design. This demonstrates that the SA algorithm is capable of optimizing combinatorial problems linked to the minimum-cost design of full-scale water distribution systems.
BENDING RAY-TRACING BASED ON SIMULATED ANNEALING METHOD
Institute of Scientific and Technical Information of China (English)
周竹生; 谢金伟
2011-01-01
This paper proposes a new ray-tracing method based on the concept of simulated annealing. The new method not only solves the traditional ray-tracing methods' over-dependence on pre-established initial ray-paths, but also ensures the construction of satisfactory ray-paths and the associated traveltime calculation between fixed sources and receivers, even for models with highly complicated velocity fields. As a result, ray-paths whose traveltimes approach the global minimum are found successfully. Furthermore, the algorithm can also calculate ray-paths with locally minimal traveltimes and can easily constrain them by instructing rays to pass through fixed points. Trial results on theoretical models have proved the feasibility and stability of the method.
Energy Technology Data Exchange (ETDEWEB)
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David
2012-04-11
A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
Fonville, Judith M; Bylesjö, Max; Coen, Muireann; Nicholson, Jeremy K; Holmes, Elaine; Lindon, John C; Rantalainen, Mattias
2011-10-31
Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of Trypanosoma brucei brucei infection. Automated and user-friendly procedures for the kernel-optimization have been incorporated into version 1.1.1 of the freely available K-OPLS software package for both R and Matlab to enable easy application of K-OPLS for non-linear prediction modeling.
Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest
2016-05-01
In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling artificial neural network (ANN) with genetic algorithm (GA) methods (ANN-GA), and simulated annealing (SA) methods (ANN-SA), were utilised. Ratios of depth of pit penetration in aquifer to aquifer thickness, pit bottom radius to its top radius, inverse of pit advance time and the HH in the observation wells to the distance of observation wells from the centre of the pit were used as inputs to the networks. To achieve the objective two hybrid models consisting of ANN-GA and ANN-SA with 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of two earlier models, two new models were developed to predict the HH in the observation wells for the period of the mining process. The accuracy and reliability of models are verified by field data, results of a numerical finite element model using SEEP/W, outputs of simple ANNs and some well-known analytical solutions. Predicted results obtained by the hybrid methods are closer to the field data compared to the outputs of analytical and simple ANN models. Results show that despite the use of fewer and simpler parameters by the hybrid models, the ANN-GA and to some extent the ANN-SA have the ability to compete with the numerical models.
Spatio-Temporal MEG Source Localization Using Simulated Annealing
Institute of Scientific and Technical Information of China (English)
霍小林; 李军; 刘正东
2001-01-01
Locating the sources of brain magnetic fields is a basic problem of magnetoencephalography (MEG). The localization of multiple current dipoles is a difficult problem in the inverse study of MEG. Through a study of the spatio-temporal source modeling (STSM) of MEG, a method combining STSM with simulated annealing to locate multiple current dipoles is presented. This method can overcome the shortcoming of other optimization methods of becoming trapped in local minima. The dipole parameters can be separated into linear and nonlinear components, and the dimensionality of the optimization can be greatly reduced by optimizing the nonlinear components only. Compared with MUSIC (MUltiple SIgnal Classification), this method correspondingly relaxes the requirement that the dipole sources be independent.
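The separation of linear and nonlinear dipole parameters described above can be sketched as variable projection: the annealer proposes only the source positions, and the linear amplitudes are eliminated by least squares inside each objective evaluation. The Gaussian "lead field" below is a hypothetical stand-in for a real MEG forward model, and all values are invented for illustration.

```python
import numpy as np

def forward(positions, sensors):
    """Hypothetical lead field: each source gives a Gaussian spatial pattern
    (a stand-in for the real MEG forward model, for illustration only)."""
    return np.exp(-(sensors[:, None] - positions[None, :]) ** 2)

def misfit(positions, sensors, data):
    """Variable projection: linear amplitudes are solved by least squares,
    so the annealer searches only the nonlinear (position) parameters."""
    G = forward(positions, sensors)
    amps, *_ = np.linalg.lstsq(G, data, rcond=None)
    return float(np.linalg.norm(G @ amps - data))

rng = np.random.default_rng(0)
sensors = np.linspace(-3.0, 3.0, 40)
data = forward(np.array([-1.0, 1.2]), sensors) @ np.array([2.0, -1.5])

pos = np.array([0.5, -0.5])           # poor initial guess of the two positions
e0 = e = misfit(pos, sensors, data)
best_pos, best_e = pos, e
T = 1.0
for _ in range(2000):                 # simple geometric-cooling SA loop
    cand = pos + rng.normal(scale=0.1, size=2)
    ec = misfit(cand, sensors, data)
    if ec < e or rng.random() < np.exp(-(ec - e) / T):
        pos, e = cand, ec
        if e < best_e:
            best_pos, best_e = pos, e
    T *= 0.997
```

Only two nonlinear parameters are annealed here instead of four total, which is the dimensionality reduction the abstract refers to.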
Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim
2015-09-01
We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone to implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1.-3. Qu.: 0.89-0.91). Direct comparison showed that both automated (median: 0.82, 1.-3. Qu.: 0.75-0.85) and manual (median: 0.61, 1.-3. Qu.: 0.52-0.67) BIC measures from μCT were significantly positively correlated with HI (median: 0.65, 1.-3. Qu.: 0.59-0.72) (manual: R(2)=0.87, automated: R(2)=0.75, p<0.001). These promising results show that μCT may become a valid alternative to assess osseointegration in three dimensions.
EVENT-DRIVEN SIMULATION OF INTEGRATE-AND-FIRE MODELS WITH SPIKE-FREQUENCY ADAPTATION
Institute of Scientific and Technical Information of China (English)
Lin Xianghong; Zhang Tianwen
2009-01-01
The evoked spike discharges of a neuron depend critically on the recent history of its electrical activity. A well-known example is the phenomenon of spike-frequency adaptation, a commonly observed property of neurons. In this paper, using a leaky integrate-and-fire model that includes an adaptation current, we propose an event-driven strategy to simulate integrate-and-fire models with spike-frequency adaptation. Such an approach is more precise than the traditional clock-driven numerical integration approach because the timing of spikes is treated exactly. In experiments, we simulated the adaptation time course of a single neuron and of a random network with spike-timing dependent plasticity using both event-driven and clock-driven strategies. The results indicate that (1) the temporal precision of spiking events affects the dynamics of single neurons as well as networks under the different simulation strategies, and (2) the simulation time scales linearly with the total number of spiking events in the event-driven simulation strategy.
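The event-driven idea above, advancing directly from spike to spike rather than stepping on a clock grid, can be sketched for a leaky integrate-and-fire neuron with an adaptation current that decays between spikes. The closed-form membrane solution and all parameter values below are illustrative assumptions, not the paper's model; the threshold crossing is located by bisection so spike times are treated (numerically) exactly.

```python
import math

TAU_M, TAU_W = 10.0, 100.0            # membrane / adaptation time constants (ms)
R, I, VTH, B = 1.0, 2.0, 1.0, 0.5     # gain, input, threshold, adaptation jump

def v_at(t, w0):
    """Closed-form membrane potential after a reset (V=0) given that the
    adaptation current started at w0 and decays with TAU_W."""
    decay_m = math.exp(-t / TAU_M)
    decay_w = math.exp(-t / TAU_W)
    drive = R * I * (1.0 - decay_m)
    adapt = R * w0 * TAU_W / (TAU_W - TAU_M) * (decay_w - decay_m)
    return drive - adapt

def next_spike_time(w0, t_hi=1000.0, tol=1e-9):
    """Bisection for the threshold-crossing time: one event-driven step."""
    if v_at(t_hi, w0) < VTH:
        return None                    # no spike within the horizon
    lo, hi = 0.0, t_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if v_at(mid, w0) < VTH:
            lo = mid
        else:
            hi = mid
    return hi

# Advance directly from one spike to the next; the adaptation current
# accumulates at each spike, so interspike intervals lengthen.
w, t, spikes = 0.0, 0.0, []
for _ in range(5):
    dt = next_spike_time(w)
    assert dt is not None
    t += dt
    spikes.append(t)
    w = w * math.exp(-dt / TAU_W) + B
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

With no adaptation (w0 = 0) the first crossing reduces to the textbook value t = TAU_M * ln(2) for these constants, which gives a quick sanity check on the closed form.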
Modernizing quantum annealing using local searches
Chancellor, Nicholas
2017-02-01
I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques.
Resolution-Adapted All-Atomic and Coarse-Grained Model for Biomolecular Simulations.
Shen, Lin; Hu, Hao
2014-06-10
We develop here an adaptive multiresolution method for the simulation of complex heterogeneous systems such as the protein molecules. The target molecular system is described with the atomistic structure while maintaining concurrently a mapping to the coarse-grained models. The theoretical model, or force field, used to describe the interactions between two sites is automatically adjusted in the simulation processes according to the interaction distance/strength. Therefore, all-atomic, coarse-grained, or mixed all-atomic and coarse-grained models would be used together to describe the interactions between a group of atoms and its surroundings. Because the choice of theory is made on the force field level while the sampling is always carried out in the atomic space, the new adaptive method preserves naturally the atomic structure and thermodynamic properties of the entire system throughout the simulation processes. The new method will be very useful in many biomolecular simulations where atomistic details are critically needed.
Dynamically adaptive Lattice Boltzmann simulation of shallow water flows with the Peano framework
Neumann, Philipp
2015-09-01
We present a dynamically adaptive Lattice Boltzmann (LB) implementation for solving the shallow water equations (SWEs). Our implementation extends an existing LB component of the Peano framework. We revise the modular design with respect to the incorporation of new simulation aspects and LB models. The basic SWE-LB implementation is validated in different breaking dam scenarios. We further provide a numerical study on stability of the MRT collision operator used in our simulations.
The Self-Adaptive Fuzzy PID Controller in Actuator Simulated Loading System
Directory of Open Access Journals (Sweden)
Chuanhui Zhang
2013-05-01
Full Text Available This paper analyzes the structural principle of the actuator simulated loading system with variable stiffness and establishes a simplified model. It also investigates the application of self-adaptive fuzzy PID (Proportion Integration Differentiation) tuning in the actuator simulated loading system with variable stiffness. Because the loading system is connected with the steering system by a spring rod, there is strong coupling, and parametric variations accompany the variations in stiffness. Based on feed-forward compensation of the disturbance caused by the motion of the steering engine, system performance can be improved by using fuzzy adaptive PID control to compensate for the changes in system parameters caused by changes in stiffness. By combining fuzzy control with traditional PID control, fuzzy adaptive PID control is able to choose the parameters more properly.
3D Simulation of Flow with Free Surface Based on Adaptive Octree Mesh System
Institute of Scientific and Technical Information of China (English)
Li Shaowu; Zhuang Qian; Huang Xiaoyun; Wang Dong
2015-01-01
The technique of adaptive tree mesh is an effective way to reduce computational cost through automatic adjustment of cell size according to necessity. In the present study, the 2D numerical N-S solver based on the adaptive quadtree mesh system was extended to a 3D one, in which a spatially adaptive octree mesh system and a multiple particle level set method were adopted for convenience in dealing with the air-water-structure multiple-medium coexisting domain. The stretching process of a dumbbell was simulated and the results indicate that the meshes are well adaptable to the free surface. The collapsing process of a water column impinging on a circular cylinder was simulated, and the results show that the processes of fluid splitting and merging are properly simulated. The interaction of second-order Stokes waves with a square cylinder was simulated, and the obtained drag force is consistent with the result of the Morison wave force formula with the coefficient values of the stable drag component and the inertial force component set as 2.54.
Largenet2: an object-oriented programming library for simulating large adaptive networks
Zschaler, Gerd
2012-01-01
The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://rincedd.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License.
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)
2014-10-01
Topics: Shaped Offset Quadrature Phase Shift Keying (SOQPSK), Orthogonal Frequency Division Multiplexing (OFDM), Bit Error Rate (BER). Example: link-dependent adaptive radio. Other applications: tradeoffs of phased array antennas, utility of multiple-access schemes, performance. Simulation framework architecture: object-oriented MATLAB to maximize reusability and flexibility.
Large-scale microstructural simulation of load-adaptive bone remodeling in whole human vertebrae
Badilatti, Sandro D.; Christen, Patrik; Levchuk, Alina; Hazrati Marangalou, Javad; Rietbergen, van Bert; Parkinson, Ian; Müller, Ralph
2016-01-01
Identification of individuals at risk of bone fractures remains challenging despite recent advances in bone strength assessment. In particular, the future degradation of the microstructure and load adaptation has been disregarded. Bone remodeling simulations have so far been restricted to small-volu
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Sagert, I; Fattoyev, F J; Postnikov, S; Horowitz, C J
2015-01-01
Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. In this work, we present proof-of-principle 3D Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). We perform benchmark studies of $^{16} \\mathrm{O}$, $^{208} \\mathrm{Pb}$ and $^{238} \\mathrm{U}$ nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so...
Energy Technology Data Exchange (ETDEWEB)
Monsalve, A.; Artigas, A.; Celentano, D.; Melendez, F.
2004-07-01
The heating and cooling curves during the batch annealing process of low carbon steel have been modeled using the finite element technique. This has allowed prediction of the transient thermal profile at every point of the annealed coils, particularly the hottest and coldest ones. The results have been adequately validated against experimental measurements, with good agreement found between experimental values and those predicted by the model. Moreover, an Avrami recrystallization model has been coupled to this thermal balance computation. Interrupted annealing experiments have been performed by measuring the recrystallized fraction at the extreme points of the coil for different times. These data made it possible to validate the developed recrystallization model through reasonably good numerical-experimental fits. (Author) 6 refs.
Simulation and Performance Analysis of Adaptive Filtering Algorithms in Noise Cancellation
Ferdouse, Lilatul; Nipa, Tamanna Haque; Jaigirdar, Fariha Tasmin
2011-01-01
Noise problems in signals have gained huge attention due to the need for a noise-free output signal in numerous communication systems. The principle of adaptive noise cancellation is to acquire an estimate of the unwanted interfering signal and subtract it from the corrupted signal. The noise cancellation operation is controlled adaptively with the target of achieving an improved signal-to-noise ratio. This paper concentrates upon the analysis of an adaptive noise canceller using Recursive Least Square (RLS), Fast Transversal Recursive Least Square (FTRLS) and Gradient Adaptive Lattice (GAL) algorithms. The performance analysis of the algorithms is done based on convergence behavior, convergence time, correlation coefficients and signal-to-noise ratio. After comparing all the simulated results we observed that GAL performs the best in noise cancellation in terms of Correlation Coefficient, SNR and Convergence Time. RLS, FTRLS and GAL were never evaluated and compared before on their performance in noise cancellation in ...
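The cancellation principle described above, estimating the interference from a reference input and subtracting it from the corrupted primary input, can be sketched with NLMS (chosen here for brevity; it is not one of the RLS/FTRLS/GAL algorithms the paper compares). The two-tap interference path and all constants are invented for the example.

```python
import math
import random

def nlms_cancel(primary, reference, n_taps=8, mu=0.05, eps=1e-6):
    """Adaptive noise canceller (NLMS): estimate the interference from the
    reference input and subtract it from the primary (signal + noise) input."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    cleaned = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                        # tapped delay line
        y = sum(wi * xi for wi, xi in zip(w, buf))  # interference estimate
        e = d - y                                   # cleaned output sample
        norm = sum(xi * xi for xi in buf) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        cleaned.append(e)
    return cleaned

rng = random.Random(1)
n = 4000
signal = [math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
noise = [rng.uniform(-1.0, 1.0) for _ in range(n)]
# Interference reaching the primary sensor: a short FIR-filtered copy
# of the reference noise (the two taps are invented for the example).
interf = [0.8 * noise[k] - 0.4 * (noise[k - 1] if k > 0 else 0.0)
          for k in range(n)]
primary = [s + v for s, v in zip(signal, interf)]
cleaned = nlms_cancel(primary, noise)
```

Because the desired signal is uncorrelated with the reference, the filter converges to the interference path and the error output approaches the clean signal; after convergence the residual power is far below the interference power.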
Adaptive Wavelet Collocation Method for Simulation of Time Dependent Maxwell's Equations
Li, Haojun; Rieder, Andreas; Freude, Wolfgang
2012-01-01
This paper investigates an adaptive wavelet collocation time domain method for the numerical solution of Maxwell's equations. In this method a computational grid is dynamically adapted at each time step by using the wavelet decomposition of the field at that time instant. In the regions where the fields are highly localized, the method assigns more grid points, and in the regions where the fields are sparse, there are fewer grid points. On the adapted grid, update schemes with high spatial order and explicit time stepping are formulated. The method has a high compression rate, which substantially reduces the computational cost, allowing efficient use of computational resources. This adaptive wavelet collocation method is especially suitable for simulation of guided-wave optical devices.
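The grid-adaptation step described above, keeping points where the field is rough and dropping them where it is smooth, can be illustrated with a one-level Haar detail analysis in 1D. The threshold and test field are arbitrary choices for the sketch, not the paper's scheme for Maxwell's equations.

```python
import math

def haar_details(f):
    """One-level Haar detail coefficients of a sampled field (even length)."""
    return [(f[2 * i] - f[2 * i + 1]) / math.sqrt(2.0)
            for i in range(len(f) // 2)]

def adapt_grid(f, eps):
    """Keep fine grid points only where the local detail exceeds eps."""
    keep = set(range(0, len(f), 2))      # coarse grid is always kept
    for i, d in enumerate(haar_details(f)):
        if abs(d) > eps:
            keep.add(2 * i + 1)          # refine where the field is rough
    return sorted(keep)

# A field that is flat on the left and sharply localized near x = 0.75:
n = 64
field = [math.tanh(40.0 * (j / n - 0.75)) for j in range(n)]
grid = adapt_grid(field, eps=1e-3)
```

The resulting grid retains the fine points only around the steep front, which is the compression effect the abstract describes; a full method would do this per level of a multilevel transform and per time step.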
Woo, Jihwan; Miller, Charles A; Abbas, Paul J
2009-05-01
The Hodgkin-Huxley (HH) model does not simulate the significant changes in auditory nerve fiber (ANF) responses to sustained stimulation that are associated with neural adaptation. Given that the electric stimuli used by cochlear prostheses can result in adapted responses, a computational model incorporating an adaptation process is warranted if such models are to remain relevant and contribute to related research efforts. In this paper, we describe the development of a modified HH single-node model that includes potassium ion (K(+)) concentration changes in response to each action potential. This activity-related change results in an altered resting potential, and hence, excitability. Our implementation of K(+)-related changes uses a phenomenological approach based upon K(+) accumulation and dissipation time constants. Modeled spike times were computed using repeated presentations of modeled pulse-train stimuli. Spike-rate adaptation was characterized by rate decrements and time constants and compared against ANF data from animal experiments. Responses to relatively low (250 pulse/s) and high rate (5000 pulse/s) trains were evaluated and the novel adaptation model results were compared against model results obtained without the adaptation mechanism. In addition to spike-rate changes, jitter and spike intervals were evaluated and found to change with the addition of modeled adaptation. These results provide one means of incorporating a heretofore neglected (although important) aspect of ANF responses to electric stimuli. Future studies could include evaluation of alternative versions of the adaptation model elements and broadening the model to simulate a complete axon, and eventually, a spatially realistic model of the electrically stimulated nerve within extracochlear tissues.
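A phenomenological accumulation/dissipation scheme of the kind described above can be sketched in a few lines: each action potential increments a K(+)-like variable that raises the effective threshold and then dissipates exponentially between pulses, producing spike-rate adaptation to a pulse train. All constants are invented for illustration and are not the authors' fitted values.

```python
import math

TAU_DIS = 50.0   # hypothetical K+ dissipation time constant (ms)
DK = 0.15        # hypothetical K+ increment per action potential (arb. units)

def run_train(rate_hz=250, n_pulses=100, amp=1.2):
    """Pulse-train response: each spike accumulates K+, which raises the
    effective threshold; K+ dissipates exponentially between pulses."""
    dt = 1000.0 / rate_hz        # interpulse interval (ms)
    k, t, spikes = 0.0, 0.0, []
    for _ in range(n_pulses):
        threshold = 1.0 + k      # altered resting state -> reduced excitability
        if amp > threshold:
            spikes.append(t)
            k += DK              # activity-related K+ accumulation
        k *= math.exp(-dt / TAU_DIS)   # dissipation until the next pulse
        t += dt
    return spikes

spikes = run_train()
```

Early in the train the fiber follows nearly every pulse; as the K(+) variable builds up, spikes occur only when it has partially dissipated, so interspike intervals lengthen, which is the rate-decrement behavior the abstract characterizes.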
Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
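The step-size control strategy described above can be sketched on a scalar toy problem by step doubling: compare one full first-order (Lie) splitting step against two half steps, accept when the difference is below tolerance, and rescale the timestep with the exponent appropriate to a first-order method. The toy equation and controller constants are illustrative, not the DFSP implementation.

```python
import math

A, B = 2.0, 1.0   # toy problem du/dt = -A*u + B, split into decay + source

def lie_step(u, dt):
    """First-order Lie splitting: exact decay substep, then exact source substep."""
    return u * math.exp(-A * dt) + B * dt

def adaptive_solve(u0, t_end, tol=1e-5, dt=0.1):
    """Step doubling: estimate the local splitting error from one full step
    vs. two half steps, and adapt dt to keep that error below tol."""
    u, t = u0, 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = lie_step(u, dt)
        fine = lie_step(lie_step(u, dt / 2.0), dt / 2.0)
        err = abs(fine - coarse)          # local splitting-error estimate
        if err <= tol:
            u, t = fine, t + dt           # accept the more accurate value
        # Lie splitting is first order: local error ~ dt**2, hence exponent 1/2.
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return u

u_num = adaptive_solve(1.0, 1.0)
u_exact = B / A + (1.0 - B / A) * math.exp(-A * 1.0)
```

The same accept/rescale logic applies when the two substeps are a reaction solver and a diffusion solver on a mesh; only the error norm changes.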
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses
Energy Technology Data Exchange (ETDEWEB)
Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)
2014-10-15
Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 for IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the needs of the RAMI analyses. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the most relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.
Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko
2002-11-01
We have been engaged in the development of a multi-scale adaptive simulation technique for incompressible turbulent flow, designed so that the important scale components in the flow field are detected automatically by the lifting wavelet and solved selectively. In conventional incompressible schemes, it is common to solve a Poisson equation for the pressure to satisfy the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively may not be impossible, but it is very troublesome because it requires generation of control volumes at each time step. We therefore adopted the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. However, it is relatively new and requires demonstration studies before being combined with wavelet-based adaptation. In the present study, 2-D and 3-D backward-facing step flows were selected as test problems and the applicability to turbulent flow was verified in detail. In addition, the combination of wavelet-based adaptation with the weakly compressible model towards adaptive turbulence simulation is discussed.
Adaptive finite element simulation of flow and transport applications on parallel computers
Kirk, Benjamin Shelton
The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale, multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations poses particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers are therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport application classes in the subsequent application studies to illustrate the flexibility of the
Xia, Peng; Hu, Jie; Peng, Yinghong
2015-12-01
Retinal prostheses for the restoration of functional vision are under development, and visual prostheses targeting proximal stages of the visual pathway are also being explored. To investigate the experience with visual prostheses, psychophysical experiments using simulated prosthetic vision in normally sighted individuals are necessary. In this study, a helmet display showing real-time images from a camera attached to the helmet provided the simulated vision, and experiments on recognizing and discriminating multiple objects were used to evaluate visual performance under different parameters (gray scale, distortion, and dropout). The process of fitting and training with visual prostheses was simulated and estimated by adaptation to the parameters over time. The results showed that increasing the number of gray-scale levels and decreasing phosphene distortion and dropout rate improved recognition performance significantly, and the recognition accuracy was 61.8 ± 7.6% under the optimum condition (gray scale: 8, distortion: k = 0, dropout: 0%). The adaptation experiments indicated that recognition performance improved with time and that the effect of adaptation to distortion was greater than that to dropout, which implies a difference in the adaptation mechanisms for the two parameters.
Pawlik, Andreas H; Vecchia, Claudio Dalla
2015-01-01
We present a suite of cosmological radiation-hydrodynamical simulations of the assembly of galaxies driving the reionization of the intergalactic medium (IGM) at z >~ 6. The simulations account for the hydrodynamical feedback from photoionization heating and the explosion of massive stars as supernovae (SNe). Our reference simulation, which was carried out in a box of size 25 comoving Mpc/h using 2 x 512^3 particles, produces a reasonable reionization history and matches the observed UV luminosity function of galaxies. Simulations with different box sizes and resolutions are used to investigate numerical convergence, and simulations in which either SNe or photoionization heating or both are turned off, are used to investigate the role of feedback from star formation. Ionizing radiation is treated using accurate radiative transfer at the high spatially adaptive resolution at which the hydrodynamics is carried out. SN feedback strongly reduces the star formation rates (SFRs) over nearly the full mass range of s...
Childers, J T; LeCompte, T J; Papka, M E; Benjamin, D P
2015-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Institute of Scientific and Technical Information of China (English)
宛剑业; 张飞超; 高丽媛; 刘卫博
2016-01-01
To study the layout planning of the electronic-throttle production workshop of company YY, this paper applies both the traditional SLP method and a genetic simulated annealing hybrid algorithm, and uses Proplanner software to simulate the two resulting layout plans (Plans 1 and 2). The simulation results show that Plan 2 is clearly superior to Plan 1 in parts-handling distance, handling time, and handling cost, which indicates that for workshop layout planning the genetic simulated annealing hybrid algorithm is more feasible and rational than SLP.
Institute of Scientific and Technical Information of China (English)
罗晨; 李渊; 刘勇; 刘晓明
2012-01-01
Aiming at the slow convergence of the standard genetic algorithm in task allocation, and building on a formal specification of task allocation in multi-agent systems, this paper proposes a simulated annealing genetic algorithm (SAGA) that integrates simulated annealing into the genetic search. The basic idea and key steps of SAGA are presented in detail, and the algorithm is validated by simulation experiments. The results illustrate that SAGA converges faster and finds better solutions than the standard genetic algorithm.
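The hybrid scheme in SAGA-style algorithms can be sketched as a genetic loop whose offspring are screened by a Metropolis acceptance test with a cooling temperature. The sketch below is a minimal illustration; the bit-string encoding, one-point crossover, geometric cooling schedule, and OneMax-style fitness are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def simulated_annealing_ga(fitness, n_bits=20, pop_size=30,
                           generations=100, t0=1.0, alpha=0.95, seed=0):
    """GA whose offspring replace their parents through a Metropolis
    (simulated annealing) acceptance test rather than greedy selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        next_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = parent[:cut] + mate[cut:]
            child[rng.randrange(n_bits)] ^= 1     # single-bit mutation
            delta = fitness(child) - fitness(parent)
            # Metropolis rule: accept improvements always, and worse
            # offspring with probability exp(delta / T) while T is high.
            if delta >= 0 or rng.random() < math.exp(delta / temp):
                next_pop.append(child)
            else:
                next_pop.append(parent)
        pop = next_pop
        temp *= alpha                             # geometric cooling
    return max(pop, key=fitness)
```

Calling `simulated_annealing_ga(sum)` maximizes the number of 1-bits (OneMax) and returns an individual near the all-ones optimum; the early high-temperature acceptances preserve diversity that greedy replacement would destroy.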
Bauer, Robert; Gharabaghi, Alireza
2015-01-01
Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting.
3D design and electric simulation of a silicon drift detector using a spiral biasing adapter
Li, Yu-yun; Xiong, Bo; Li, Zheng
2016-09-01
The detector system of combining a spiral biasing adapter (SBA) with a silicon drift detector (SBA-SDD) is largely different from the traditional silicon drift detector (SDD), including the spiral SDD. It has a spiral biasing adapter of the same design as a traditional spiral SDD and an SDD with concentric rings having the same radius. Compared with the traditional spiral SDD, the SBA-SDD separates the spiral's functions of biasing adapter and the p-n junction definition. In this paper, the SBA-SDD is simulated using a Sentaurus TCAD tool, which is a full 3D device simulation tool. The simulated electric characteristics include electric potential, electric field, electron concentration, and single event effect. Because of the special design of the SBA-SDD, the SBA can generate an optimum drift electric field in the SDD, comparable with the conventional spiral SDD, while the SDD can be designed with concentric rings to reduce surface area. Also the current and heat generated in the SBA are separated from the SDD. To study the single event response, we simulated the induced current caused by incident heavy ions (20 and 50 μm penetration length) with different linear energy transfer (LET). The SBA-SDD can be used just like a conventional SDD, such as X-ray detector for energy spectroscopy and imaging, etc.
Simulation of Old Urban Residential Area Evolution Based on Complex Adaptive System
Institute of Scientific and Technical Information of China (English)
YANG Fan; WANG Xiao-ming; HUA Hong
2009-01-01
On the basis of complex adaptive system theory, this paper proposed an agent-based model of an old urban residential area, in which residents and providers are the two adaptive agents. The behaviors of residents and providers in this model are trained with back propagation and simulated with Swarm software based on environment-rules-agents interaction. The model simulates the evolution of the old urban residential area and analyzes the relations between that evolution and urban management, with the city of Chaozhou as background. The following results are obtained: (1) Simulation without government intervention indicates a trend of housing ageing, environmental deterioration, economic depression, and social filtering-down in the old urban residential area. If the development of the area is controlled by profit-maximizing developers in the market, factors such as social justice and historical and cultural value will be ignored. (2) If the government carries out policies and measures that serve their original aims, simulation reveals that the old urban residential area can adapt to its environment and develop sustainably. This conclusion emphasizes that government must act as initiator and program maker, directly guiding residents and other providers in the development of the old urban residential area.
Görbil, Gökçe; Gelenbe, Erol
The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.
Sahni, Onkar; Jansen, Kenneth; Shephard, Mark; Taylor, Charles
2007-11-01
Flow within the healthy human vascular system is typically laminar, but diseased conditions can alter the geometry sufficiently to produce transitional/turbulent flows in regions focal to (and immediately downstream of) the diseased section. The mean unsteadiness (pulsatile or respiratory cycle) further complicates the situation, making traditional turbulence simulation techniques (e.g., Reynolds-averaged Navier-Stokes simulations (RANS)) suspect. At the other extreme, direct numerical simulation (DNS), while fully appropriate, can lead to large computational expense, particularly when the simulations must be done quickly since they are intended to affect the outcome of a medical treatment (e.g., virtual surgical planning). To produce simulations in a clinically relevant time frame requires: 1) an adaptive meshing technique that closely matches the desired local mesh resolution in all three directions to the highly anisotropic physical length scales in the flow, 2) efficient solution algorithms, and 3) excellent scaling on massively parallel computers. In this presentation we will demonstrate results for a subject-specific simulation of an abdominal aortic aneurysm using a stabilized finite element method on anisotropically adapted meshes consisting of O(10^8) elements over O(10^4) processors.
Institute of Scientific and Technical Information of China (English)
张扬; 杨松涛; 张香芝
2012-01-01
This paper investigates data fusion in wireless sensor networks (WSNs). Sensor nodes have limited computing and communication capability, and WSNs are deployed with overlapping coverage, which produces large amounts of redundant data; data fusion is therefore needed to eliminate redundant and invalid data and save network communication energy. Combining the global search of genetic algorithms with the local search of simulated annealing, a simulated annealing genetic algorithm (SA-GA) for WSN data fusion is proposed. SA-GA quickly finds the optimal sensor-node sequence for mobile-agent routing and fuses the data along it. Simulation results show that, compared with the genetic algorithm and the simulated annealing algorithm alone, SA-GA finds the globally optimal fusion node sequence more quickly, fuses the data effectively, and achieves lower network energy consumption and network delay.
Institute of Scientific and Technical Information of China (English)
路鹏; 丛晓; 周东岱
2013-01-01
With the application of artificial intelligence techniques to educational evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In such a test, the computer dynamically updates its estimate of the learner's ability level and selects tailored questions from an item bank, which requires the system to execute efficiently enough to meet the needs of the test. To solve this problem, an intelligent item-selection system based on the simulated annealing algorithm is proposed. The experimental results show that the method selects near-optimal questions from the item bank for each learner while greatly improving the efficiency of question selection.
Validation Through Simulations of a Cn2 Profiler for the ESO/VLT Adaptive Optics Facility
Garcia-Rissmann, A; Kolb, J; Louarn, M Le; Madec, P -Y; Neichel, B
2015-01-01
The Adaptive Optics Facility (AOF) project envisages transforming one of the VLT units into an adaptive telescope and providing its ESO (European Southern Observatory) second generation instruments with turbulence-corrected wavefronts. For MUSE and HAWK-I this correction will be achieved through the GALACSI and GRAAL AO modules working in conjunction with a 1170-actuator Deformable Secondary Mirror (DSM) and the new Laser Guide Star Facility (4LGSF). Multiple wavefront sensors will enable GLAO and LTAO capabilities, whose performance can greatly benefit from knowledge of the stratification of the turbulence in the atmosphere. This work, totally based on end-to-end simulations, describes the validation tests conducted on a Cn2 profiler adapted for the AOF specifications. Because an absolute profile calibration is strongly dependent on a reliable knowledge of turbulence parameters r0 and L0, the tests presented here refer only to normalized output profiles. Uncertainties in the input parameters inherent t...
Buntemeyer, Lars; Peters, Thomas; Klassen, Mikhail; Pudritz, Ralph E
2015-01-01
We present an algorithm for solving the radiative transfer problem on massively parallel computers using adaptive mesh refinement and domain decomposition. The solver is based on the method of characteristics which requires an adaptive raytracer that integrates the equation of radiative transfer. The radiation field is split into local and global components which are handled separately to overcome the non-locality problem. The solver is implemented in the framework of the magneto-hydrodynamics code FLASH and is coupled by an operator splitting step. The goal is the study of radiation in the context of star formation simulations with a focus on early disc formation and evolution. This requires a proper treatment of radiation physics that covers both the optically thin as well as the optically thick regimes and the transition region in particular. We successfully show the accuracy and feasibility of our method in a series of standard radiative transfer problems and two 3D collapse simulations resembling the ear...
Saanouni, Khemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain
2010-06-01
This work proposes a complete adaptive numerical methodology which uses 'advanced' elastoplastic constitutive equations coupling: thermal effects, large elasto-viscoplasticity with mixed non linear hardening, ductile damage and contact with friction, for 2D machining simulation. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on the state variables under large plastic deformation developed for metal forming simulation are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to the orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage and thermal effects and their effects on the adiabatic shear band formation including the formation of cracks are investigated.
End to end numerical simulations of the MAORY multiconjugate adaptive optics system
Arcidiacono, Carmelo; Bregoli, Giovanni; Diolaiti, Emiliano; Foppiani, Italo; Cosentino, Giuseppe; Lombini, Matteo; Butler, R C; Ciliegi, Paolo
2014-01-01
MAORY is the adaptive optics module of the E-ELT that will feed the MICADO imaging camera through a gravity invariant exit port. MAORY has been foreseen to implement MCAO correction through three high order deformable mirrors driven by the reference signals of six Laser Guide Stars (LGSs) feeding as many Shack-Hartmann Wavefront Sensors. A three Natural Guide Stars (NGSs) system will provide the low order correction. We developed a code for the end-to-end simulation of the MAORY adaptive optics (AO) system in order to obtain high-fidelity modeling of the system performance. It is based on the IDL language and makes extensive use of GPUs. Here we present the architecture of the simulation tool and its achieved and expected performance.
Multi-GPU adaptation of a simulator of heart electric activity
Directory of Open Access Journals (Sweden)
Víctor M. García
2013-12-01
The simulation of the electrical activity of the heart requires solving a large system of ordinary differential equations, which takes an enormous amount of computation time. In recent years, graphics processing units (GPUs) have been introduced into the field of high-performance computing. These powerful computing devices have attracted research groups that need to simulate the electrical activity of the heart. The research group signing this paper has developed a simulator of cardiac electrical activity that runs on a single GPU. This article describes the adaptation and modification of the simulator to run on multiple GPUs. The results confirm that the technique significantly reduces the execution time compared to that obtained with a single GPU, and allows the solution of larger problems.
Scale-adaptive simulation of a hot jet in cross flow
Energy Technology Data Exchange (ETDEWEB)
Duda, B M; Esteve, M-J [AIRBUS Operations S.A.S., Toulouse (France); Menter, F R; Hansen, T, E-mail: benjamin.duda@airbus.com [ANSYS Germany GmbH, Otterfing (Germany)
2011-12-22
The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.
Simulating computer adaptive testing with the Mood and Anxiety Symptom Questionnaire.
Flens, Gerard; Smits, Niels; Carlier, Ingrid; van Hemert, Albert M; de Beurs, Edwin
2016-08-01
In a post hoc simulation study (N = 3,597 psychiatric outpatients), we investigated whether the efficiency of the 90-item Mood and Anxiety Symptom Questionnaire (MASQ) could be improved for assessing clinical subjects with computerized adaptive testing (CAT). A CAT simulation was performed on each of the 3 MASQ subscales (Positive Affect, Negative Affect, and Somatic Anxiety). With the CAT simulation's stopping rule set at a high level of measurement precision, the results showed that patients' test administration can be shortened substantially; the mean decrease in items used for the subscales ranged from 56% up to 74%. Furthermore, the predictive utility of the CAT simulations was sufficient for all MASQ scales.
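A post hoc CAT simulation of this kind rests on a simple loop: estimate the trait level, administer the most informative remaining item (re-using the subject's recorded answer), and stop once the standard error falls below a threshold. The sketch below assumes a two-parameter logistic IRT model with a Newton-step ability update; the item parameters, stopping value, and function names are illustrative assumptions, not details from the MASQ study.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) probability of a keyed response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def cat_simulation(responses, items, se_stop=0.4, max_items=None):
    """Post hoc CAT: administer the most informative unused item until
    the standard error of the ability estimate drops below se_stop.
    items[i] is an (a, b) parameter pair; responses[i] is the
    pre-scored 0/1 answer the subject gave to item i."""
    theta, used, info_sum = 0.0, [], 0.0
    remaining = list(range(len(items)))
    while remaining:
        # select the unused item with maximum information at current theta
        i = max(remaining, key=lambda j: item_information(theta, *items[j]))
        remaining.remove(i)
        used.append(i)
        a, b = items[i]
        p = p_correct(theta, a, b)
        info_sum += item_information(theta, a, b)
        # one Newton step of the maximum-likelihood ability update
        theta += a * (responses[i] - p) / info_sum
        if 1.0 / math.sqrt(info_sum) < se_stop:
            break
        if max_items is not None and len(used) >= max_items:
            break
    return theta, used
```

The efficiency gain reported above corresponds to `len(used)` being much smaller than the full item count while the final standard error stays below the chosen precision bound.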
Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries
Deiterding, Ralf; Wood, Stephen L.
2016-09-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
GLAMER Part I: A Code for Gravitational Lensing Simulations with Adaptive Mesh Refinement
Metcalf, R Benton
2013-01-01
A computer code is described for the simulation of gravitational lensing data. The code incorporates adaptive mesh refinement in choosing which rays to shoot based on the requirements of the source size, location and surface brightness distribution, or to find critical curves/caustics. A variety of source surface brightness models are implemented to represent galaxies and quasar emission regions. The lensing mass can be represented by point masses (stars), smoothed simulation particles, analytic halo models, pixelized mass maps or any combination of these. The deflection and beam distortions (convergence and shear) are calculated by a modified tree algorithm when halos, point masses or particles are used, and by FFT when mass maps are used. The combination of these methods allows a very large dynamical range to be represented in a single simulation. Individual images of galaxies can be represented in a simulation that covers many square degrees. For an individual strongly lensed quasar, source sizes from the s...
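When the lensing mass is a set of point masses, each ray's deflection is a sum of point-lens terms, alpha(theta) = sum_i theta_E,i^2 (theta - theta_i)/|theta - theta_i|^2. The code accelerates this with a tree algorithm; the direct sum below merely illustrates the quantity being computed (a sketch; the array names and units convention are assumptions).

```python
import numpy as np

def deflection_point_masses(rays, lens_pos, theta_e2):
    """Direct-sum deflection angle at each ray position for a set of
    point-mass lenses, alpha = sum_i theta_E_i^2 (x - x_i)/|x - x_i|^2.
    rays: (R, 2) ray positions; lens_pos: (N, 2) lens positions;
    theta_e2: (N,) squared Einstein radii of the lenses."""
    d = rays[:, None, :] - lens_pos[None, :, :]   # (R, N, 2) separations
    r2 = np.sum(d * d, axis=-1)                   # (R, N) squared distances
    return np.sum(theta_e2[None, :, None] * d / r2[..., None], axis=1)
```

For a single lens of unit squared Einstein radius at the origin, a ray at (2, 0) is deflected by (0.5, 0), as expected from the 1/theta fall-off.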
Leo, Jennifer; Goodwin, Donna
2014-04-01
Disability simulations have been used as a pedagogical tool to simulate the functional and cultural experiences of disability. Despite their widespread application, disagreement about their ethical use, value, and efficacy persists. The purpose of this study was to understand how postsecondary kinesiology students experienced participation in disability simulations. An interpretative phenomenological approach guided the study's collection of journal entries and clarifying one-on-one interviews with four female undergraduate students enrolled in a required adapted physical activity course. The data were analyzed thematically and interpreted using the conceptual framework of situated learning. Three themes transpired: unnerving visibility, negotiating environments differently, and tomorrow I'll be fine. The students described emotional responses to the use of wheelchairs as disability artifacts, developed awareness of environmental barriers to culturally and socially normative activities, and moderated their discomfort with the knowledge they could end the simulation at any time.
Ostermeir, Katja; Zacharias, Martin
2014-01-15
A Hamiltonian Replica-Exchange Molecular Dynamics (REMD) simulation method has been developed that employs a two-dimensional backbone and one-dimensional side chain biasing potential specifically to promote conformational transitions in peptides. To exploit the replica framework optimally, the level of the biasing potential in each replica was appropriately adapted during the simulations. This resulted in both high exchange rates between neighboring replicas and improved occupancy/flow of all conformers in each replica. The performance of the approach was tested on several peptide and protein systems and compared with regular MD simulations and previous REMD studies. Improved sampling of relevant conformational states was observed for unrestrained protein and peptide folding simulations as well as for refinement of a loop structure with restricted mobility of loop flanking protein regions.
Simulation of macromolecular liquids with the adaptive resolution molecular dynamics technique
Peters, J. H.; Klein, R.; Delle Site, L.
2016-08-01
We extend the application of the adaptive resolution technique (AdResS) to liquid systems composed of alkane chains of different lengths. The aim of the study is to develop and test the modifications of AdResS required in order to handle the change of representation of large molecules. The robustness of the approach is shown by calculating several relevant structural properties and comparing them with the results of full atomistic simulations. The extended scheme represents a robust prototype for the simulation of macromolecular systems of interest in several fields, from material science to biophysics.
Institute of Scientific and Technical Information of China (English)
朱均燕; 温永仙
2013-01-01
A new method for handling boundary values when generating candidate solutions in the traditional simulated annealing algorithm is proposed and applied to the two-dimensional Toy model. Structure prediction was carried out for four Fibonacci sequences; the results show that the algorithm is feasible and effective.
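One standard way to keep a newly generated candidate inside the feasible box during simulated annealing is to reflect any overshoot back across the violated bound. The abstract does not state the authors' exact treatment, so the following is a generic sketch of the reflection idea only.

```python
def reflect_into_bounds(x, lo, hi):
    """Fold a proposed coordinate back into [lo, hi] by reflection.
    The modulo over twice the box width handles arbitrarily large
    overshoots with a single fold."""
    width = hi - lo
    y = (x - lo) % (2.0 * width)
    return lo + (2.0 * width - y if y > width else y)
```

For example, a proposal of 1.2 in the interval [0, 1] is folded to 0.8, and -0.3 to 0.3; unlike clamping, reflection does not pile probability mass onto the boundary itself.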
Kozdon, J. E.; Wilcox, L.; Aranda, A. R.
2014-12-01
The goal of this work is to develop a new set of simulation tools for earthquake rupture dynamics based on state-of-the-art high-order, adaptive numerical methods capable of handling complex geometries. High-order methods are ideal for earthquake rupture simulations as the problems are wave-dominated and the waves excited in simulations propagate over distances much larger than their fundamental wavelength. When high-order methods are used for such problems, significantly fewer degrees of freedom are required as compared with low-order methods. The base numerical method in our new software elements is a discontinuous Galerkin method based on curved, Kronecker product hexahedral elements. We currently use MPI for off-node parallelism and are in the process of exploring strategies for on-node parallelism. Spatial mesh adaptivity is handled using the p4est library, and temporal adaptivity is achieved through an Adams-Bashforth based local time stepping method; we are presently in the process of including dynamic spatial adaptivity, which we believe will be valuable for capturing the small-scale features around the propagating rupture front. One of the key features of our software elements is that the method is provably stable, even after the inclusion of the nonlinear friction laws which govern rupture dynamics. In this presentation we will both outline the structure of the software elements and validate the rupture dynamics with SCEC benchmark test problems. We are also presently developing several realistic simulation geometries which may also be reported on. Finally, the software elements that we have designed are fully public domain and have been designed with tightly coupled, wave-dominated multiphysics applications in mind. This latter design decision means the software elements are applicable to many other geophysical and non-geophysical applications.
Institute of Scientific and Technical Information of China (English)
覃德泽
2011-01-01
An optimization algorithm based on simulated annealing (SA) is proposed to solve the routing problem. The SA algorithm searches for the best routing strategy using the weighted cumulative expected transmission time as the cost function. Simulations based on 802.11 wireless networks compare the network throughput and the packet loss ratio of the SA-based routing algorithm and the shortest-path routing algorithm. The results show that the SA-based routing algorithm outperforms the shortest-path routing algorithm.
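The search described above can be sketched as SA over candidate paths. The WCETT form below (a Draves-style multi-radio metric) and the toy graph are illustrative assumptions; the paper's exact cost function may differ.

```python
import math
import random

def wcett(path, ett, channel, beta=0.5):
    """Weighted cumulative ETT: (1-beta) * sum of link ETTs
    plus beta * the largest per-channel ETT sum (bottleneck channel)."""
    links = list(zip(path, path[1:]))
    total = sum(ett[l] for l in links)
    per_chan = {}
    for l in links:
        per_chan[channel[l]] = per_chan.get(channel[l], 0.0) + ett[l]
    return (1.0 - beta) * total + beta * max(per_chan.values())

def random_path(adj, src, dst, rng):
    """Random simple path from src to dst via randomized DFS."""
    path, visited = [src], {src}
    while path and path[-1] != dst:
        nbrs = [v for v in adj[path[-1]] if v not in visited]
        if nbrs:
            nxt = rng.choice(nbrs)
            visited.add(nxt)
            path.append(nxt)
        else:
            path.pop()          # dead end: backtrack
    return path

def sa_route(adj, ett, channel, src, dst, seed=1, iters=200, t0=1.0, alpha=0.98):
    """SA over candidate routes: propose a random path, accept by Metropolis."""
    rng = random.Random(seed)
    cur = random_path(adj, src, dst, rng)
    cur_c = wcett(cur, ett, channel)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(iters):
        cand = random_path(adj, src, dst, rng)
        c = wcett(cand, ett, channel)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha
    return best, best_c
```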
Institute of Scientific and Technical Information of China (English)
王家文; 王岩; 陈前; 李伟; 陈钰青; 靳书岩; 牛伟; 陈凤霞
2014-01-01
A dynamic recrystallization (DRX) model of the solution-annealed GH4169 alloy was established based on thermal-mechanical simulation tests. The finite element software DEFORM-3D was used to simulate the DRX volume fraction distribution in cylindrical samples under different compression conditions. Combining quantitative metallographic analysis, electron backscatter diffraction (EBSD) analysis and the finite element results, the effects of the deformation parameters on the microstructure at the centre of the cylindrical samples were investigated. The results show that increasing the deformation temperature and lowering the strain rate both promote the deformation homogeneity of the cylindrical samples during thermal-mechanical compression. Lowering the strain rate accelerates the transformation of low-angle grain boundaries into high-angle grain boundaries. The DRX nucleation mechanism of the alloy is discontinuous dynamic recrystallization dominated by bulging of the original grain boundaries; under the tested conditions, the evolution of twin boundaries plays an important role in the DRX process. The differences between the experimental and simulated results, and their causes, are also analysed.
Institute of Scientific and Technical Information of China (English)
Yao Jianyong; Jiao Zongxia; Han Songshan
2013-01-01
Low-velocity tracking capability is a key performance measure of a flight motion simulator (FMS), and it is mainly affected by nonlinear friction force. Though many compensation schemes with ad hoc friction models have been proposed, this paper deals with low-velocity control without a friction model, since this is easy to implement in practice. Firstly, a nonlinear model of the FMS middle frame, which is driven by a hydraulic rotary actuator, is built. Noting that in the low-velocity region the unmodeled friction force is mainly characterized by a slowly changing part, a simple adaptive law can be employed to learn and compensate this part. To guarantee the boundedness of the adaptation process, a discontinuous projection is utilized and a robust scheme is proposed. The controller achieves a prescribed output tracking transient performance and final tracking accuracy in general, while obtaining asymptotic output tracking in the absence of modeling errors. In addition, a saturated projection adaptive scheme is proposed to improve the global learning capability when the velocity becomes large, which might otherwise make the previously proposed projection-based adaptive law unstable. Theoretical and extensive experimental results verify the high-performance nature of the proposed adaptive robust control strategy.
Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations
Calo, Victor M.
2011-05-14
In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including the automatic goal-oriented hp adaptivity and the solver, work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
Two-stage re-estimation adaptive design: a simulation study
Directory of Open Access Journals (Sweden)
Francesca Galli
2013-10-01
Background: Adaptive clinical trial design has been proposed as a promising new approach to improve the drug discovery process. Among the many options available, adaptive sample size re-estimation is of great interest, mainly because of its ability to avoid a large up-front commitment of resources. In this simulation study, we investigate the statistical properties of two-stage sample size re-estimation designs in terms of type I error control, study power and sample size, in comparison with the fixed-sample study. Methods: We simulated a balanced two-arm trial aimed at comparing two means of normally distributed data, using the inverse normal method to combine the results of each stage, and considering scenarios jointly defined by the following factors: the sample size re-estimation method, the information fraction, the type of group sequential boundaries and the use of futility stopping. Calculations were performed using the statistical software SAS™ (version 9.2). Results: Under the null hypothesis, every type of adaptive design considered maintained the prefixed type I error rate, but futility stopping was required to avoid an unwanted increase in sample size. When deviating from the null hypothesis, the gain in power usually achieved with the adaptive design and its performance in terms of sample size were influenced by the specific design options considered. Conclusions: We show that adaptive designs incorporating futility stopping, a sufficiently high information fraction (50-70%) and the conditional power method for sample size re-estimation have good statistical properties, which include a gain in power when trial results are less favourable than anticipated.
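The inverse normal method used to combine the two stages can be written in a few lines. The choice of weights from the information fraction (w1 = sqrt(f)) below is one standard convention, assumed here for illustration.

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, info_fraction):
    """Combine one-sided stage-wise p-values with prespecified weights
    w1 = sqrt(f), w2 = sqrt(1 - f), where f is the information fraction
    planned for stage 1. Returns the combined z statistic and p-value."""
    nd = NormalDist()
    w1, w2 = sqrt(info_fraction), sqrt(1.0 - info_fraction)
    z = w1 * nd.inv_cdf(1.0 - p1) + w2 * nd.inv_cdf(1.0 - p2)
    return z, 1.0 - nd.cdf(z)
```

Because the weights are fixed before stage 2, the stage-2 sample size can be re-estimated without inflating the type I error rate, which is the property exploited by the designs studied in the abstract.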
Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.
2016-10-01
In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
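The DSF pair potential has a closed Fennell-Gezelter form in which both the energy and the force go smoothly to zero at the cutoff. A sketch (the damping parameter and cutoff values are illustrative assumptions):

```python
import math

def dsf_potential(r, q1, q2, alpha=0.2, rc=12.0):
    """Damped shifted force (DSF) Coulomb pair energy:
    erfc-damped Coulomb term, shifted so V(rc) = 0, plus a linear term
    that also forces dV/dr to vanish at the cutoff rc."""
    if r >= rc:
        return 0.0
    erfc = math.erfc
    shift = erfc(alpha * rc) / rc
    fshift = (erfc(alpha * rc) / rc**2
              + 2.0 * alpha / math.sqrt(math.pi)
              * math.exp(-(alpha * rc) ** 2) / rc)
    return q1 * q2 * (erfc(alpha * r) / r - shift + fshift * (r - rc))
```

The smooth vanishing of both energy and force at `rc` is what makes this short-range form embed cleanly in the H-AdResS coupling described in the abstract.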
Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej
2016-08-09
The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to supramolecular CG models, where several molecules are represented as a single CG bead, is challenging, but it provides higher computational gains and connection to the established MARTINI CG force field. Difficulties that arise from such coupling have been so far bypassed with bundled AT water models, where additional harmonic bonds between oxygen atoms within a given supramolecular water bundle are introduced. While these models simplify the supramolecular coupling, they also cause in certain situations spurious artifacts, such as partial unfolding of biomolecules. In this work, we present a new clustering algorithm SWINGER that can concurrently make, break, and remake water bundles and in conjunction with the AdResS permits the use of original AT water models. We apply our approach to simulate a hybrid SPC/MARTINI water system and show that the essential properties of water are correctly reproduced with respect to the standard monoscale simulations. The developed hybrid water model can be used in biomolecular simulations, where a significant speed up can be obtained without compromising the accuracy of the AT water model.
Effectively explore metastable states of proteins by adaptive nonequilibrium driving simulations
Wan, Biao; Xu, Shun; Zhou, Xin
2017-03-01
Nonequilibrium drivings applied in molecular dynamics (MD) simulations can efficiently extend the visiting range of protein conformations, but might compel systems to go far away from equilibrium and thus mainly explore irrelevant conformations. Here we propose a general method, called adaptive nonequilibrium simulation (ANES), to automatically adjust the external driving on the fly, based on the feedback of the short-time average response of the system. Thus, the ANES approximately keeps the local equilibrium but efficiently accelerates the global motion. We illustrate the capability of the ANES in highly efficiently exploring metastable conformations in the deca-alanine peptide and find that the 0.2-μs ANES approximately captures the important states and folding and unfolding pathways in the HP35 solution by comparing with the result of the recent 398-μs equilibrium MD simulation on Anton [S. Piana et al., Proc. Natl. Acad. Sci. USA 109, 17845 (2012), 10.1073/pnas.1201811109].
Online body schema adaptation based on internal mental simulation and multisensory feedback
Directory of Open Access Journals (Sweden)
Pedro eVicente
2016-03-01
In this paper, we describe a novel approach to obtain automatic adaptation of the robot body schema and to improve the robot's perceptual and motor skills based on this body knowledge. Predictions obtained through a mental simulation of the body are combined with the real sensory feedback to achieve two objectives simultaneously: body schema adaptation and markerless 6D hand pose estimation. The body schema consists of a computer graphics simulation of the robot, which includes the arm and head kinematics (adapted online during the movements) and an appearance model of the hand shape and texture. The mental simulation process generates predictions of how the hand will appear in the robot camera images, based on the body schema and the proprioceptive information (i.e. motor encoders). These predictions are compared to the actual images using Sequential Monte Carlo techniques to feed a particle-based Bayesian estimation method that estimates the parameters of the body schema. The updated body schema improves the estimates of the 6D hand pose, which is then used in a closed-loop control scheme (i.e. visual servoing), enabling precise reaching. We report experiments with the iCub humanoid robot that support the validity of our approach. A number of simulations with precise ground truth were performed to evaluate the estimation capabilities of the proposed framework. We then show how the use of high-performance GPU programming and an edge-based algorithm for visual perception allow for real-time implementation in real-world scenarios.
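The particle-based Bayesian update can be sketched as one generic Sequential Monte Carlo step. The Gaussian observation model and the scalar "body schema parameter" below are toy assumptions standing in for the paper's 6D pose and image comparison.

```python
import math
import random

def particle_filter_step(particles, predict, observed, noise_sd, rng):
    """One SMC update: weight each body-schema hypothesis by how well its
    predicted observation matches the real one, then resample."""
    weights = []
    for p in particles:
        err = predict(p) - observed
        weights.append(math.exp(-0.5 * (err / noise_sd) ** 2))
    total = sum(weights)
    weights = [w / total for w in weights]
    # systematic resampling
    out, cum, i = [], 0.0, -1
    u = rng.random() / len(particles)
    for _ in range(len(particles)):
        while cum < u and i < len(weights) - 1:
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += 1.0 / len(particles)
    return out
```

After the step, the surviving particles concentrate around parameter values whose predicted observations match the sensory feedback, which is the mechanism behind the online calibration described above.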
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro
2016-07-07
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
A Hamiltonian theory of adaptive resolution simulations of classical and quantum models of nuclei
Kreis, Karsten; Donadio, Davide; Kremer, Kurt; Potestio, Raffaello
2015-03-01
Quantum delocalization of atomic nuclei strongly affects the physical properties of low temperature systems, such as superfluid helium. However, also at room temperature nuclear quantum effects can play an important role for molecules composed of light atoms. An accurate modeling of these effects is possible making use of the Path Integral formulation of Quantum Mechanics. In simulations, this numerically expensive description can be restricted to a small region of space, while modeling the remaining atoms as classical particles. In this way the computational resources required can be significantly reduced. In the present talk we demonstrate the derivation of a Hamiltonian formulation for a bottom-up, theoretically solid coupling between a classical model and a Path Integral description of the same system. The coupling between the two models is established with the so-called Hamiltonian Adaptive Resolution Scheme, resulting in a fully adaptive setup in which molecules can freely diffuse across the classical and the Path Integral regions by smoothly switching their description on the fly. Finally, we show the validation of the approach by means of adaptive resolution simulations of low temperature parahydrogen. Graduate School Materials Science in Mainz, Staudinger Weg 9, 55128 Mainz, Germany.
Directory of Open Access Journals (Sweden)
Joshua Rodewald
2016-10-01
Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the structure's dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
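The reverse-engineering step rests on pairwise transfer entropy, TE(X→Y) = Σ p(y', y, x) log2[ p(y'|y, x) / p(y'|y) ]. A plug-in estimator for discrete series with history length 1 (a simplifying assumption; practical analyses often use longer histories and bias correction):

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy X -> Y (bits, history length 1) from two
    equal-length discrete time series, using plug-in probabilities."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log(p_cond_full / p_cond_self, 2)
    return te
```

When Y simply copies X with a one-step delay, TE(X→Y) approaches one bit while TE(Y→X) stays near zero, which is the directional asymmetry used to infer links between network nodes.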
Directory of Open Access Journals (Sweden)
Jana Hoymann
2016-06-01
Decision-makers in the fields of urban and regional planning in Germany face new challenges. High rates of urban sprawl need to be reduced by increased inner-urban development, while settlements have to adapt to climate change and contribute to the reduction of greenhouse gas emissions at the same time. In this study, we analyze conflicts in the management of urban areas and develop integrated sustainable land use strategies for Germany. The spatially explicit land use change model Land Use Scanner is used to simulate alternative scenarios of land use change for Germany for 2030. A multi-criteria analysis is set up based on these scenarios and a set of indicators, which are used to measure whether the mitigation and adaptation objectives can be achieved and to uncover conflicts between these aims. The results show that built-up and transport area development can be influenced both in magnitude and in spatial distribution to contribute to climate change mitigation and adaptation. Strengthening inner-urban development is particularly effective in reducing built-up and transport area development. It is possible to reduce built-up and transport area development to approximately 30 ha per day in 2030, which matches the sustainability objective of the German Federal Government for the year 2020. In the case of adaptation to climate change, the inclusion of extreme flood events in spatial planning requirements may contribute to a reduction of the damage potential.
Long-time simulations of the Kelvin-Helmholtz instability using an adaptive vortex method.
Sohn, Sung-Ik; Yoon, Daeki; Hwang, Woonjae
2010-10-01
The nonlinear evolution of an interface subject to a parallel shear flow is studied by the vortex sheet model. We perform long-time computations for the vortex sheet in density-stratified fluids by using the point vortex method and investigate late-time dynamics of the Kelvin-Helmholtz instability. We apply an adaptive point insertion procedure and a high-order shock-capturing scheme to the vortex method to handle the nonuniform distribution of point vortices and enhance the resolution. Our adaptive vortex method successfully simulates chaotically distorted interfaces of the Kelvin-Helmholtz instability with fine resolutions. The numerical results show that the Kelvin-Helmholtz instability develops a secondary instability at late times, distorting the internal rollup, and eventually evolves into a disordered structure.
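The adaptive point-insertion idea, in its simplest form, inserts a marker wherever two adjacent interface points drift too far apart. Midpoint insertion is shown below for illustration; a production vortex method would interpolate along the sheet with higher order.

```python
import math

def refine_interface(points, max_spacing):
    """Adaptive point insertion: wherever two adjacent interface markers
    are farther apart than max_spacing, insert their midpoint. Repeated
    every step, this keeps the rolled-up vortex sheet resolved."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if math.hypot(x1 - x0, y1 - y0) > max_spacing:
            out.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
        out.append((x1, y1))
    return out
```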
Adaptive and Iterative Methods for Simulations of Nanopores with the PNP-Stokes Equations
Mitscha-Baude, Gregor; Tulzer, Gerhard; Heitzinger, Clemens
2016-01-01
We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations.
Simulation and analysis of a Truck Model's ride comfort based on fuzzy adaptive control theory
Institute of Scientific and Technical Information of China (English)
JIANG Li-biao; WANG Deng-feng; NI Qiang; TAN Wei-ming
2007-01-01
This paper analyses and verifies the fuzzy adaptive control strategy of an electronically controlled air suspension system for a heavy truck. A seven-degree-of-freedom vehicle suspension model and a road input model were created. Using Matlab/Simulink toolboxes and modules, a dynamic simulation model of the heavy truck with air suspension, a fuzzy adaptive control model and a height control model for the air spring were built, and intelligent control and analysis of the root mean square value of the acceleration at the vehicle's centre of gravity under road excitation were carried out. Results show that fuzzy control contributes little to reducing body vibration on good pavement but gives a clear benefit on bad roads, and the root mean square value of the acceleration at the vehicle's centre of gravity is obviously lower than that of a passive suspension.
Kreis, K.; Fogarty, A. C.; Kremer, K.; Potestio, R.
2015-09-01
In adaptive resolution simulations, molecular fluids are modeled employing different levels of resolution in different subregions of the system. When traveling from one region to the other, particles change their resolution on the fly. One of the main advantages of such approaches is the computational efficiency gained in the coarse-grained region. In this respect the best coarse-grained system to employ in the low resolution region would be the ideal gas, making intermolecular force calculations in the coarse-grained subdomain redundant. In this case, however, a smooth coupling is challenging due to the high energetic imbalance between typical liquids and a system of non-interacting particles. In the present work, we investigate this approach, using as a test case the most biologically relevant fluid, water. We demonstrate that a successful coupling of water to the ideal gas can be achieved with current adaptive resolution methods, and discuss the issues that remain to be addressed.
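In force-interpolation AdResS the coupling is a position-dependent weight w(x) that is 1 in the atomistic zone, 0 in the coarse-grained zone, and ramps smoothly across the hybrid layer; pair forces are mixed as F = w_a w_b F_AT + (1 − w_a w_b) F_CG. With an ideal-gas coarse-grained region F_CG is identically zero, so atomistic forces simply fade out across the hybrid layer. A 1D-slab sketch (geometry parameters are assumptions):

```python
import math

def resolution_weight(x, half_width_at, d_hybrid):
    """AdResS switching function for a slab centred at x = 0:
    1 in the atomistic zone, 0 in the coarse-grained zone, and a
    smooth cos^2 ramp across the hybrid layer of width d_hybrid."""
    d = abs(x)
    if d <= half_width_at:
        return 1.0
    if d >= half_width_at + d_hybrid:
        return 0.0
    return math.cos(math.pi * (d - half_width_at) / (2.0 * d_hybrid)) ** 2

def mixed_pair_force(f_at, f_cg, x_a, x_b, half_width_at, d_hybrid):
    """Force interpolation: F = w_a w_b F_AT + (1 - w_a w_b) F_CG.
    With an ideal-gas CG model, f_cg = 0 and the force vanishes smoothly."""
    lam = (resolution_weight(x_a, half_width_at, d_hybrid)
           * resolution_weight(x_b, half_width_at, d_hybrid))
    return lam * f_at + (1.0 - lam) * f_cg
```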
Quirós-Pacheco, Fernando; Agapito, Guido; Riccardi, Armando; Esposito, Simone; Le Louarn, Miska; Marchetti, Enrico
2012-07-01
This paper presents the performance analysis based on numerical simulations of the Pyramid Wavefront sensor Module (PWM) to be included in ERIS, the new Adaptive Optics (AO) instrument for the Adaptive Optics Facility (AOF). We have analyzed the performance of the PWM working either in a low-order or in a high-order wavefront sensing mode of operation. We show that the PWM in the high-order sensing mode can provide SR > 90% in K band using bright guide stars under median seeing conditions (0.85 arcsec seeing and 15 m/s of wind speed). In the low-order sensing mode, the PWM can sense and correct Tip-Tilt (and if requested also Focus mode) with the precision required to assist the LGS observations to get an SR > 60% and > 20% in K band, using up to a ~16.5 and ~19.5 R-magnitude guide star, respectively.
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
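Adaptive time-stepping for tau-leaping typically bounds the expected relative change of each species over the leap. A simplified Cao-Petzold-Gillespie-style selection (with the species-dependent factor g_i taken as 1 for brevity, an assumed simplification):

```python
def select_tau(state, stoich, propensities, eps=0.03):
    """Pick the largest tau such that the mean and variance of each
    species' change over the leap stay within eps of its population.
    stoich[j] is a list of (species index, stoichiometric change)."""
    mu = [0.0] * len(state)
    sigma2 = [0.0] * len(state)
    for j, a in enumerate(propensities):
        for i, nu in stoich[j]:
            mu[i] += nu * a            # drift of species i
            sigma2[i] += nu * nu * a   # variance rate of species i
    tau = float('inf')
    for i, x in enumerate(state):
        bound = max(eps * x, 1.0)
        if mu[i] != 0.0:
            tau = min(tau, bound / abs(mu[i]))
        if sigma2[i] != 0.0:
            tau = min(tau, bound * bound / sigma2[i])
    return tau
```

A spatial (RDME) version applies the same bound with diffusion jumps between voxels included among the reaction channels.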
Binocular adaptive optics visual simulator: understanding the impact of aberrations on actual vision
Fernández, Enrique J.; Prieto, Pedro M.; Artal, Pablo
2010-02-01
A novel adaptive optics system for the study of vision is presented. The apparatus is capable of binocular operation. The binocular adaptive optics visual simulator permits measuring and manipulating the ocular aberrations of the two eyes simultaneously. Aberrations can be corrected, or modified, while the subject performs visual testing under binocular vision. One of the most remarkable features of the apparatus is the use of a single correcting device and a single wavefront sensor (Hartmann-Shack). Both the operation and the total cost of the instrument benefit largely from this attribute. The correcting device is a liquid-crystal-on-silicon (LCOS) spatial light modulator. The basic operation of the visual simulator consists in the simultaneous projection of the two eyes' pupils onto both the corrector and the sensor. Examples of the potential of the apparatus for studying the impact of aberrations under binocular vision are presented. Measurements of contrast sensitivity with modified combinations of spherical aberration through focus are shown. Special attention was paid to the simulation of monovision, where one eye is corrected for far vision while the other is focused at a near distance. The results suggest complex binocular interactions. The apparatus can be dedicated to a better understanding of the vision mechanism, which might have an important impact on developing new protocols and treatments for presbyopia. The technique and the instrument might contribute to the search for optimized ophthalmic corrections.
Adaptations to isolated shoulder fatigue during simulated repetitive work. Part I: Fatigue.
Tse, Calvin T F; McDonald, Alison C; Keir, Peter J
2016-08-01
Upper extremity muscle fatigue is challenging to identify during industrial tasks and places changing demands on the shoulder complex that are not fully understood. The purpose of this investigation was to examine adaptation strategies in response to isolated anterior deltoid muscle fatigue while performing simulated repetitive work. Participants completed two blocks of simulated repetitive work separated by an anterior deltoid fatigue protocol; the first block had 20 work cycles and the post-fatigue block had 60 cycles. Each work cycle was 60s in duration and included 4 tasks: handle pull, cap rotation, drill press and handle push. Surface EMG of 14 muscles and upper body kinematics were recorded. Immediately following fatigue, glenohumeral flexion strength was reduced, rating of perceived exertion scores increased and signs of muscle fatigue (increased EMG amplitude, decreased EMG frequency) were present in anterior and posterior deltoids, latissimus dorsi and serratus anterior. Along with other kinematic and muscle activity changes, scapular reorientation occurred in all of the simulated tasks and generally served to increase the width of the subacromial space. These findings suggest that immediately following fatigue people adapt by repositioning joints to maintain task performance and may also prioritize maintaining subacromial space width.
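The two fatigue signs reported (increased EMG amplitude, decreased EMG frequency) correspond to the RMS amplitude and mean power frequency of an EMG epoch. A dependency-free sketch using a plain O(n²) DFT (fine for short epochs; real pipelines would use an FFT):

```python
import math

def emg_fatigue_metrics(signal, fs):
    """Return (RMS amplitude, mean power frequency) of one EMG epoch.
    Fatigue typically appears as rising RMS and falling MPF."""
    n = len(signal)
    rms = math.sqrt(sum(s * s for s in signal) / n)
    mean = sum(signal) / n
    power, weighted = 0.0, 0.0
    for k in range(1, n // 2):          # one-sided spectrum, DC excluded
        re = sum((signal[t] - mean) * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = -sum((signal[t] - mean) * math.sin(2 * math.pi * k * t / n)
                  for t in range(n))
        p = re * re + im * im
        f = k * fs / n                  # bin frequency in Hz
        power += p
        weighted += f * p
    return rms, weighted / power
```

Comparing these two numbers between the pre- and post-fatigue work blocks is the standard way to confirm the muscle fatigue reported in the abstract.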
Parkinson, S. D.; Hill, J.; Piggott, M. D.; Allison, P. A.
2014-09-01
High-resolution direct numerical simulations (DNSs) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier-Stokes equations. Two-dimensional simulations are known to produce unrealistic cohesive vortices that are not representative of the real three-dimensional physics. The effect of this phenomena is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 106 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring model performance in capturing the range of dynamics on a range of meshes. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. The use of adaptive mesh optimisation is shown to reduce the required element count by approximately two orders of magnitude in comparison with fixed, uniform mesh simulations. This leads to a substantial reduction in computational cost. The computational savings and flexibility afforded by adaptivity along with the flexibility of FE methods make this model well suited to simulating turbidity currents in complex domains.
Yoo, Jin-Hyeong; Murugan, Muthuvel; Wereley, Norman M.
2013-04-01
This study investigates a lumped-parameter human body model, including the lower leg in a seated posture, within a quarter-car model for blast injury assessment simulation. To simulate the shock acceleration of the vehicle, a mine blast analysis was conducted on a generic land vehicle crew compartment (sand box) structure. For the purpose of simulating human body dynamics with non-linear parameters, a physical model of a lumped-parameter human body within a quarter-car model was implemented using multi-body dynamic simulation software. For implementing the control scheme, a skyhook algorithm was made to work with the multi-body dynamic model by running a co-simulation with the control scheme software plug-in. The injury criteria and tolerance levels for the biomechanical effects are discussed for each of the identified vulnerable body regions, such as the relative head displacement and the neck bending moment. The desired objective of this analytical model development is to study the performance of an adaptive semi-active magnetorheological damper that can be used for vehicle-occupant protection technology enhancements to the seat design in a mine-resistant military vehicle.
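The skyhook algorithm mentioned above is a standard semi-active damping strategy. A minimal sketch of its common on-off form follows; the function name, damping coefficients and example velocities are illustrative assumptions, not values from the study.

```python
# Minimal sketch of an on-off skyhook law for a semi-active damper
# (illustrative only; parameter values are assumptions, not from the study).

def skyhook_force(v_sprung, v_rel, c_max=1500.0, c_min=100.0):
    """Return the damper force chosen by the on-off skyhook rule.

    v_sprung : absolute velocity of the sprung (occupant) mass [m/s]
    v_rel    : relative velocity across the damper [m/s]
    """
    # Apply high damping only when the damper force can oppose the
    # sprung-mass motion; otherwise back off to minimum damping.
    if v_sprung * v_rel > 0.0:
        c = c_max   # high damping: force opposes sprung-mass velocity
    else:
        c = c_min   # low damping: avoid transmitting force into the body
    return c * v_rel

# Example: sprung mass moving up while the suspension extends -> high damping
f = skyhook_force(v_sprung=0.4, v_rel=0.2)   # -> 300.0 N
```

In a co-simulation setting such as the one described, this rule would be evaluated each control step from the observed velocities and fed back as the commanded damper coefficient.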
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
Energy Technology Data Exchange (ETDEWEB)
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)
2015-05-21
Phase-field simulation of dendritic solidification using a full threaded tree with adaptive meshing
Institute of Scientific and Technical Information of China (English)
Yin Yajun; Zhou Jianxin; Liao Dunming; Pang Shengyong; Shen Xu
2014-01-01
Simulation of the microstructure evolution during solidification is greatly beneficial to the control of solidification microstructures. A phase-field method based on the full threaded tree (FTT) for the simulation of casting solidification microstructure is proposed in this paper, and the structure of the full threaded tree and the mesh refinement method are discussed. During dendritic growth in solidification, the mesh for simulation is adaptively refined at the liquid-solid interface and coarsened in other areas. The numerical results of three-dimensional dendrite growth indicate that the phase-field method based on FTT is suitable for microstructure simulation. Most importantly, compared with a conventional uniform mesh, the FTT method can increase the spatial and temporal resolutions beyond the limits imposed by the available hardware. At the simulation time of 0.03 s in this study, the computer memory used for computation is no more than 10 MB with the FTT method, while it is about 50 MB with the uniform mesh method. The proposed FTT method is also more efficient in computation time: when the solidification time is 0.17 s in this study, computation takes about 20 h with the uniform mesh method but only 2 h with the FTT method.
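The refine-at-the-interface, coarsen-elsewhere idea can be illustrated with a toy one-dimensional sketch: cells within a band around the solid-liquid interface (phase field phi = 0) are split to the next tree level, all others are kept coarse. The data layout and thresholds below are hypothetical and do not reflect the paper's FTT implementation.

```python
# Toy sketch of interface-driven adaptive refinement in the spirit of a
# full threaded tree: cells crossing the solid-liquid interface (phi = 0)
# are refined, all others stay coarse. Layout and thresholds are assumed.

def refine_cells(cells, phi, max_level=3, band=0.15):
    """cells: list of (center, width, level); phi: callable phase field."""
    refined = []
    for x, w, lvl in cells:
        near_interface = abs(phi(x)) < band
        if near_interface and lvl < max_level:
            # split the cell into two children at the next tree level
            refined.append((x - w / 4, w / 2, lvl + 1))
            refined.append((x + w / 4, w / 2, lvl + 1))
        else:
            refined.append((x, w, lvl))
    return refined

# 1D toy phase field with the interface at x = 0.5:
phi = lambda x: x - 0.5
cells = [(0.125, 0.25, 1), (0.375, 0.25, 1), (0.625, 0.25, 1), (0.875, 0.25, 1)]
cells = refine_cells(cells, phi, band=0.15)   # only the two cells nearest 0.5 split
```

Calling `refine_cells` repeatedly as the interface advances mimics the adaptive refine/coarsen cycle during dendritic growth.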
Adaptive life simulator: A novel approach to modeling the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Kangas, L.J.; Keller, P.E.; Hashem, S. [and others]
1995-06-01
In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. This approach also exploits sensor fusion applied to biomedical sensors, optimizing the utilization of the sensors; its advantage has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Energy Technology Data Exchange (ETDEWEB)
Timofeev, E.V.; Tahir, R.B. [McGill Univ., Dept. of Mechanical Engineering, Montreal, Quebec (Canada)]. E-mail: evgeny.timofeev@mcgill.ca; Voinovich, P.A. [A.F. Ioffe Physical-Technical Inst., St. Petersburg Branch of the Joint Supercomputer Center, St. Petersburg (Russian Federation); Moelder, S. [Ryerson Polytechnic Univ., Toronto, Ontario (Canada)
2004-07-01
The concept of 'twin' grid nodes is discussed in the context of unstructured, adaptive meshes that are suitable for highly unsteady flows. The concept is applicable to internal boundary contours (within the computational domain) where the boundary conditions may need to be changed dynamically; for instance, an impermeable solid wall segment can be redefined as a fully permeable invisible boundary segment during the course of the simulation. This can be used to simulate unsteady gas flows with internal boundaries where the flow conditions may change rapidly and drastically. As a demonstration, the idea is applied to study the starting process in hypersonic air inlets by rupturing a diaphragm or by opening wall-perforations. (author)
A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides
de la Asunción, Marc; Castro, Manuel J.
2016-04-01
In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach; it supports up to three levels, power-of-two refinement ratios, different refinement criteria and several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.
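A variation-based refinement criterion of the kind described can be caricatured as flagging cells whose value jumps sharply relative to a neighbour. The function name and threshold below are assumptions for illustration only, not the paper's actual criterion.

```python
# Hypothetical sketch of a refinement criterion based on the variation of
# neighbouring cell values, in the spirit of patch-based AMR cell flagging.

def flag_for_refinement(values, threshold=0.1):
    """Return sorted indices of cells whose jump to a neighbour exceeds threshold."""
    flagged = set()
    for i in range(len(values) - 1):
        if abs(values[i + 1] - values[i]) > threshold:
            flagged.update((i, i + 1))   # refine both sides of a steep jump
    return sorted(flagged)

# A wave front between cells 2 and 3 triggers refinement there only:
eta = [0.0, 0.01, 0.02, 0.9, 0.91, 0.92]
flagged_cells = flag_for_refinement(eta)   # -> [2, 3]
```

In a patch-based scheme, flagged cells would then be clustered into rectangular patches before the fine grids are created.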
Simulation of a ground-layer adaptive optics system for the Kunlun Dark Universe Survey Telescope
Institute of Scientific and Technical Information of China (English)
Peng Jia; Sijiong Zhang
2013-01-01
Ground Layer Adaptive Optics (GLAO) is a recently developed technique extensively applied to ground-based telescopes, which mainly compensates for the wavefront errors induced by ground-layer turbulence to get an appropriate point spread function in a wide field of view. The compensation results mainly depend on the turbulence distribution. The atmospheric turbulence at Dome A in the Antarctic is mainly distributed below 15 meters, which makes it an ideal site for applications of GLAO. The GLAO system has been simulated for the Kunlun Dark Universe Survey Telescope, which will be set up at Dome A, and uses a rotating mirror to generate several laser guide stars and a wavefront sensor with a wide field of view to sequentially measure the wavefronts from different laser guide stars. The system is simulated on a computer and parameters of the system are given, which provide detailed information about the design of a practical GLAO system.
Adaptive learning in agents behaviour: A framework for electricity markets simulation
DEFF Research Database (Denmark)
Pinto, Tiago; Vale, Zita; Sousa, Tiago M.
2014-01-01
...players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. This paper presents a methodology to provide decision support to electricity market negotiating players. This model allows integrating different strategic approaches for electricity market negotiations, and choosing the most appropriate one at each time, for each different negotiation context. This methodology is integrated in ALBidS (Adaptive Learning strategic Bidding System) – a multiagent system that provides... that combines several distinct strategies to build actions proposals, so that the best can be chosen at each time, depending on the context and simulation circumstances. The choosing process includes reinforcement learning algorithms, a mechanism for negotiating contexts analysis, a mechanism for the management...
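The idea of picking the best of several bidding strategies per context via reinforcement learning can be sketched minimally as follows. ALBidS itself is a full multiagent system, so this simple weighted chooser is only an illustrative assumption, with invented strategy names and learning rate.

```python
# Toy sketch of reinforcement-learning-based strategy selection: each
# strategy carries a weight updated from observed rewards, and the
# best-weighted one is chosen. All names and values are assumptions.

class StrategyChooser:
    def __init__(self, strategies, alpha=0.3):
        self.weights = {s: 1.0 for s in strategies}
        self.alpha = alpha   # learning rate for the reward average

    def choose(self):
        # pick the currently best-weighted strategy (greedy for simplicity)
        return max(self.weights, key=self.weights.get)

    def update(self, strategy, reward):
        # exponential moving average of observed rewards per strategy
        w = self.weights[strategy]
        self.weights[strategy] = (1 - self.alpha) * w + self.alpha * reward

chooser = StrategyChooser(["trend", "average", "regression"])
for _ in range(10):
    chooser.update("regression", 2.0)   # "regression" keeps performing well
best = chooser.choose()                  # -> "regression"
```

A context-aware version would keep one weight table per negotiation context, matching the per-context selection described above.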
Vogel, Thomas
2015-01-01
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
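For reference, the standard parallel-tempering exchange rule that replica-exchange schemes such as the one above build upon can be sketched as follows; the energies and temperatures are arbitrary illustrative numbers.

```python
# Sketch of the Metropolis acceptance rule for exchanging two replicas in
# parallel tempering; kB and all numbers below are illustrative.
import math

def swap_probability(E1, T1, E2, T2, kB=1.0):
    """Metropolis acceptance probability for swapping replicas 1 and 2."""
    delta = (1.0 / (kB * T1) - 1.0 / (kB * T2)) * (E2 - E1)
    return min(1.0, math.exp(-delta))

# Offering a lower-energy state to the colder replica is always accepted:
p = swap_probability(E1=-5.0, T1=0.5, E2=-9.0, T2=1.0)   # -> 1.0
```

In an adaptive-temperature variant, the temperatures fed to this rule would be updated on the fly from the measured thermodynamic functions rather than held fixed.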
Cognitive engine based on binary ant colony simulated annealing algorithm
Institute of Scientific and Technical Information of China (English)
夏龄; 冯文江
2012-01-01
In a cognitive radio system, the cognitive engine dynamically configures its radio working parameters according to changes in the communication environment and users' requirements. To address this intelligent optimization problem in the cognitive engine, a Binary Ant Colony Simulated Annealing (BAC&SA) algorithm is proposed for cognitive radio parameter optimization. The algorithm introduces Simulated Annealing (SA) into Binary Ant Colony Optimization (BACO), combining the rapid optimization ability of BACO with the probabilistic jumping property of SA, and thus effectively avoids BACO's tendency to become trapped in local optima. Simulation results show that, compared with the Genetic Algorithm (GA) and BACO, the cognitive engine based on the BAC&SA algorithm has a clear advantage in global search ability and average fitness.
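The probabilistic jumping property that SA contributes to such hybrids is its Metropolis acceptance step: a worse candidate solution is still accepted with a temperature-dependent probability. The snippet below sketches that generic step, not the BAC&SA implementation itself; names and values are assumptions.

```python
# Generic simulated-annealing acceptance step (maximization form) of the
# kind a hybrid like BAC&SA injects into binary ant colony optimization.
import math
import random

def sa_accept(fitness_old, fitness_new, T, rng=random.random):
    """Accept a move: always if not worse, else with prob exp(dF / T)."""
    if fitness_new >= fitness_old:
        return True
    return rng() < math.exp((fitness_new - fitness_old) / T)

random.seed(0)
# A slightly worse candidate escapes a local optimum with probability
# close to exp(-0.5) ~ 0.61 at T = 1:
accepted = [sa_accept(10.0, 9.5, T=1.0) for _ in range(1000)]
rate = sum(accepted) / len(accepted)
```

Lowering `T` over iterations (the cooling schedule) gradually turns this exploratory behaviour into pure hill-climbing.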
ADRC or adaptive controller--A simulation study on artificial blood pump.
Wu, Yi; Zheng, Qing
2015-11-01
Active disturbance rejection control (ADRC) has gained popularity because it requires little knowledge about the system to be controlled, has an inherent disturbance rejection ability, and is easy to tune and implement in practical systems. In this paper, the authors compared the performance of an ADRC and an adaptive controller for an artificial blood pump for end-stage congestive heart failure patients, using only the feedback signal of pump differential pressure. The purpose of the control system was to provide sufficient perfusion as the patient's circulatory system goes through different pathological and activity variations. Because the mean arterial pressure is equal to the total peripheral flow times the total peripheral resistance, this goal was converted to making the mean aortic pressure track a reference signal. The simulation results demonstrated that the performance of the ADRC is comparable to that of the adaptive controller, with savings in modeling and computational effort and fewer design parameters: with ADRC, total peripheral flow and mean aortic pressure fall within the normal physiological ranges under activity variation (rest to exercise) and pathological variation (left ventricular strength variation), similar to the values achieved by the adaptive controller.
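The model-light character of ADRC comes from an extended state observer (ESO) that estimates a lumped "total disturbance" from output measurements alone. Below is a hedged, generic sketch of a linear third-order ESO for a second-order plant; the gains, step size and test signal are assumptions, unrelated to the blood pump controller in the paper.

```python
# Generic linear extended state observer (ESO), the core of ADRC: z3
# estimates the lumped "total disturbance" so the controller can cancel it.
# Gains (betas), b0 and dt are illustrative assumptions.

def eso_step(z, y, u, b0, betas, dt):
    """One Euler step of a 3rd-order linear ESO for a 2nd-order plant.

    z = [y_hat, ydot_hat, f_hat], where f_hat estimates the total disturbance.
    """
    e = y - z[0]                               # output estimation error
    z1 = z[0] + dt * (z[1] + betas[0] * e)
    z2 = z[1] + dt * (z[2] + betas[1] * e + b0 * u)
    z3 = z[2] + dt * (betas[2] * e)
    return [z1, z2, z3]

# Track a constant output y = 1 from a zero initial estimate; gains placed
# as a triple pole at -10 (betas = [3w, 3w^2, w^3] with w = 10):
z = [0.0, 0.0, 0.0]
for _ in range(2000):
    z = eso_step(z, y=1.0, u=0.0, b0=1.0, betas=[30.0, 300.0, 1000.0], dt=0.01)
# z[0] converges to 1, z[1] and z[2] to 0.
```

In a full ADRC loop, the control signal would be `u = (u0 - z[2]) / b0`, with `u0` from a simple PD law on the remaining double-integrator dynamics.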
Modified simulated annealing algorithm for flexible job-shop scheduling problem
Institute of Scientific and Technical Information of China (English)
李俊; 刘志雄; 张煜; 贺晶晶
2015-01-01
A modified simulated annealing algorithm was put forward to solve the flexible job-shop scheduling problem, using two individual encoding methods taken from particle swarm optimization, based respectively on particle position rounding and on roulette probability assignment. Three different local search methods were employed to constitute the neighborhood structure. The computational results show that the modified simulated annealing algorithm is more effective than the particle swarm algorithm, the hybrid particle swarm algorithm and the simulated annealing algorithm in solving the flexible job-shop scheduling problem. Compared with the position-rounding encoding method, the roulette-probability-assignment-based encoding method renders the algorithm more effective, and the local search method based on the crossover (swap) operation is better than the other two search methods in improving the solving performance of the algorithm.
Simulating spatial adaption of groundwater pumping on seawater intrusion in coastal regions
Grundmann, Jens; Ladwig, Robert; Schütze, Niels; Walther, Marc
2016-04-01
Coastal aquifer systems are used intensively to meet the growing demands for water in those regions. They are especially at risk of seawater intrusion due to aquifer overpumping, limited groundwater replenishment and unsustainable groundwater management, which in turn also impacts the social and economic development of coastal regions. One example is the Al-Batinah coastal plain in northern Oman, where irrigated agriculture is practiced by many small-scale farms at different distances from the sea, each of them pumping its water from the coastal aquifer. Due to continuous overpumping and progressing saltwater intrusion, farms near the coast have had to close because the water for irrigation became too saline. Investigating appropriate management options requires numerical density-dependent groundwater modelling, which should also portray the adaptation of groundwater abstraction schemes to the water quality. To address this challenge, a moving inner boundary condition is implemented in the numerical density-dependent groundwater model, which adjusts the locations of groundwater abstraction according to the position of the seawater intrusion front, controlled by thresholds of relative chloride concentration. The adaptation process is repeated for each management cycle within transient model simulations and allows for considering feedbacks with the consumers, e.g. agriculture, by moving farms further inland, or towards the sea if more fertile soils at the coast can be recovered. To find optimal water management strategies efficiently, the behaviour of the numerical groundwater model for different extraction and replenishment scenarios is approximated by an artificial neural network using a novel approach for state-space surrogate model development. Afterwards, the derived surrogate is coupled with an agriculture module within a simulation-based water management optimisation framework to achieve optimal cropping patterns and water abstraction schemes.
Buntemeyer, Lars; Banerjee, Robi; Peters, Thomas; Klassen, Mikhail; Pudritz, Ralph E.
2016-02-01
We present an algorithm for solving the radiative transfer problem on massively parallel computers using adaptive mesh refinement and domain decomposition. The solver is based on the method of characteristics which requires an adaptive raytracer that integrates the equation of radiative transfer. The radiation field is split into local and global components which are handled separately to overcome the non-locality problem. The solver is implemented in the framework of the magneto-hydrodynamics code FLASH and is coupled by an operator splitting step. The goal is the study of radiation in the context of star formation simulations with a focus on early disc formation and evolution. This requires a proper treatment of radiation physics that covers both the optically thin as well as the optically thick regimes and the transition region in particular. We successfully show the accuracy and feasibility of our method in a series of standard radiative transfer problems and two 3D collapse simulations resembling the early stages of protostar and disc formation.
Adaptive multi-stage integrators for optimal energy conservation in molecular simulations
Fernández-Pendás, Mario; Akhmatskaya, Elena; Sanz-Serna, J. M.
2016-12-01
We introduce a new Adaptive Integration Approach (AIA) to be used in a wide range of molecular simulations. Given a simulation problem and a step size, the method automatically chooses the optimal scheme out of an available family of numerical integrators. Although we focus on two-stage splitting integrators, the idea may be used with more general families. In each instance, the system-specific integrating scheme identified by our approach is optimal in the sense that it provides the best conservation of energy for harmonic forces. The AIA method has been implemented in the BCAM-modified GROMACS software package. Numerical tests in molecular dynamics and hybrid Monte Carlo simulations of constrained and unconstrained physical systems show that the method successfully realizes the fail-safe strategy. In all experiments, and for each of the criteria employed, the AIA is at least as good as, and often significantly outperforms, the standard Verlet scheme, as well as fixed-parameter, optimized two-stage integrators. In particular, for systems where harmonic forces play an important role, the sampling efficiency found in simulations using the AIA is up to 5 times better than that achieved with the other tested schemes.
Ltaief, Hatem
2016-06-02
We present a high performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used as an operational testbed for simulating the design of new instruments for the European Extremely Large Telescope project (E-ELT), the world's biggest eye and one of Europe's highest priorities in ground-based astronomy. The simulation corresponds to a multi-step, multi-stage procedure, which is fed, near real-time, by system and turbulence data coming from the telescope environment. Based on the PLASMA library powered by the OmpSs dynamic runtime system, our implementation relies on a task-based programming model to permit an asynchronous out-of-order execution. Using modern multicore architectures associated with the enormous computing power of GPUs, the resulting data-driven compute-intensive simulation of the entire MOAO application, composed of the tomographic reconstructor and the observing sequence, is capable of coping with the aforementioned real-time challenge and stands as a reference implementation for the computational astronomy community.
An Overview of Approaches to Modernize Quantum Annealing Using Local Searches
Directory of Open Access Journals (Sweden)
Nicholas Chancellor
2016-06-01
I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm. The quantum annealing algorithm is an analogue of simulated annealing, a classical numerical technique which is now obsolete. Hence, I explore strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms, and which additionally should be less sensitive to problem mis-specification than the traditional quantum annealing algorithm.
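A purely classical analogue of such a local (in state space) search, restricted to a small Hamming ball around a given bit string, might look like the following sketch; the annealer's actual reverse-annealing controls are hardware-specific and not modeled here, and the energy function is an invented toy.

```python
# Classical sketch of a local search confined to a small Hamming ball
# around a starting state; all names and parameters are assumptions.
import random

def local_search(energy, state, radius=2, tries=200, rng=None):
    """Randomly flip up to `radius` bits at a time, keep improving moves."""
    rng = rng or random.Random(42)
    best, best_e = state[:], energy(state)
    for _ in range(tries):
        cand = best[:]
        for i in rng.sample(range(len(cand)), rng.randint(1, radius)):
            cand[i] ^= 1            # stay within a small Hamming ball
        e = energy(cand)
        if e < best_e:              # greedy: keep strictly better states
            best, best_e = cand, e
    return best, best_e

# Toy energy minimized by the all-ones string:
energy = lambda s: -sum(s)
best, e = local_search(energy, [0, 1, 0, 1, 0, 1])   # converges toward ones
```

On hardware, the role of `local_search` would be played by the annealer seeded near `state`, with a classical outer loop choosing where to search next.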
A two-dimensional adaptive spectral element method for the direct simulation of incompressible flow
Hsu, Li-Chieh
The spectral element method is a high order discretization scheme for the solution of nonlinear partial differential equations. The method draws its strengths from the finite element method for geometrical flexibility and spectral methods for high accuracy. Although the method is, in theory, very powerful for complex phenomena such as transitional flows, its practical implementation is limited by the arbitrary choice of domain discretization. For instance, it is hard to estimate the appropriate number of elements for a specific case. Selection of regions to be refined or coarsened is difficult especially as the flow becomes more complex and memory limits of the computer are stressed. We present an adaptive spectral element method in which the grid is automatically refined or coarsened in order to capture underresolved regions of the domain and to follow regions requiring high resolution as they develop in time. The objective is to provide the best and most efficient solution to a time-dependent nonlinear problem by continually optimizing resource allocation. The adaptivity is based on an error estimator which determines which regions need more resolution. The solution strategy is as follows: compute an initial solution with a suitable initial mesh, estimate errors in the solution locally in each element, modify the mesh according to the error estimators, interpolate old mesh solutions onto the new elements, and resume the numerical solution process. A two-dimensional adaptive spectral element method for the direct simulation of incompressible flows has been developed. The adaptive algorithm effectively diagnoses and refines regions of the flow where complexity of the solution requires increased resolution. The method has been demonstrated on two-dimensional examples in heat conduction, Stokes and Navier-Stokes flows.
An adaptive multi-level simulation algorithm for stochastic biological systems.
Lester, C; Yates, C A; Giles, M B; Baker, R E
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
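The tau-leap approximation discussed above can be sketched for a single decay reaction X -> 0, with tau chosen adaptively so that the expected relative change in the state per leap stays bounded. This illustrates the general idea only, not the authors' multi-level estimator; the bound `epsilon` and all rates are assumptions.

```python
# Illustrative adaptive tau-leap for the decay reaction X -> 0 with
# propensity a(x) = k*x. Parameters are assumptions, not from the paper.
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def tau_leap_decay(x0, k, t_end, epsilon=0.05, rng=None):
    """Tau-leap simulation, with tau bounded so the expected relative
    change in x per leap is about epsilon."""
    rng = rng or random.Random(1)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        a = k * x                            # propensity of the reaction
        tau = min(epsilon * x / a, t_end - t)
        x = max(x - poisson(a * tau, rng), 0)   # leap: Poisson firings
        t += tau
    return x

x_final = tau_leap_decay(x0=1000, k=1.0, t_end=1.0)  # mean ~ 1000/e ~ 368
```

The multi-level idea would pair such paths at coarse and fine `epsilon`, sharing random variables so the correction terms have low variance.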
Deiterding, Ralf; Wood, Stephen L.
2015-11-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed the first version of a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The presentation will describe the employed algorithms and present relevant verification and validation computations. For instance, power and thrust coefficients of a Vestas V27 turbine are predicted within 5% of the manufacturer's specifications. Simulations of three Vestas V27-225kW turbines in triangular arrangement analyze the reduction in power production due to upstream wake generation for different inflow conditions.
Adaptive multi-stage integrators for optimal energy conservation in molecular simulations
Fernández-Pendás, Mario; Sanz-Serna, J M
2015-01-01
Hummels, Cameron
2011-01-01
We carry out adaptive mesh refinement (AMR) cosmological simulations of Milky-Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated Cold Dark Matter model. We evolve a suite of five halos to z = 0 and find gaseous-disk formation in all; however, in agreement with previous SPH simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (i) spatial resolution, ranging from 1700 to 212 pc; (ii) an additional pressure component to ensure that the Jeans length is always resolved; (iii) low star formation efficiency, going down to 0.1%; (iv) fixed physical resolution as opposed to comoving resolution; (v) a supernova feedback model which injects thermal energy to the local cel...
Precision in ground based solar polarimetry: Simulating the role of adaptive optics
Nagaraju, K
2012-01-01
Accurate measurement of polarization in spectral lines is important for the reliable inference of magnetic fields on the Sun. For ground based observations, polarimetric precision is severely limited by the presence of Earth's atmosphere. Atmospheric turbulence (seeing) produces signal fluctuations which combined with the non-simultaneous nature of the measurement process cause intermixing of the Stokes parameters known as seeing induced polarization cross-talk. Previous analysis of this effect (Judge et al., 2004) suggests that cross-talk is reduced not only with increase in modulation frequency but also by compensating the seeing induced image aberrations by an Adaptive Optics (AO) system. However, in those studies the effect of higher order image aberrations than those corrected by the AO system was not taken into account. We present in this paper an analysis of seeing induced cross-talk in the presence of higher order image aberrations through numerical simulation. In this analysis we find that the amount...
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
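The refinement idea above — observe a short stochastic trajectory, then refine the mesh where the trajectory spends its time — can be illustrated with a Gillespie simulation of a hypothetical birth-death system; the reaction rates and horizon below are arbitrary choices for illustration, not values from the paper.

```python
import random

def ssa_birth_death(k_birth=10.0, k_death=0.5, x0=0, t_end=50.0, seed=0):
    """Gillespie SSA for the birth-death system 0 -> X (rate k_birth) and
    X -> 0 (rate k_death * x). Returns the set of visited states; their span
    suggests where the probability density is non-negligible, and hence
    where a finite element mesh for the Fokker-Planck equation should be
    refined (coarsening elsewhere)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    visited = {x}
    while t < t_end:
        a1 = k_birth            # propensity of birth
        a2 = k_death * x        # propensity of death
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential waiting time to next event
        if rng.random() * a0 < a1:
            x += 1                      # birth
        else:
            x -= 1                      # death
        visited.add(x)
    return visited
```

For these rates the stationary distribution is Poisson with mean k_birth/k_death = 20, so a short trajectory quickly marks the states around 20 as the region worth refining.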
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Klassen, Mikhail; Pudritz, Ralph E; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-01-01
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion (FLD) solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calc...
Multi-Objective Memetic Algorithm for FPGA Placement Using Parallel Genetic Annealing
Directory of Open Access Journals (Sweden)
Praveen T.
2016-04-01
With advances in reconfigurable computing, the Field Programmable Gate Array (FPGA) has gained significance owing to its low cost and fast prototyping. Parallelism, specialization, and hardware-level adaptation are the key features of reconfigurable computing. An FPGA is a programmable chip that can be configured or reconfigured by the designer to implement any digital circuit. One major challenge in FPGA design is the placement problem. In this placement phase, the logic functions are assigned to specific cells of the circuit. The quality of the placement of the logic blocks determines the overall performance of the logic implemented in the circuits. FPGA placement is a multi-objective optimization problem that involves minimization of three or more objective functions. In this paper, we propose a novel strategy to solve the FPGA placement problem using the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Simulated Annealing technique. Experiments were conducted on multicore processors, and metrics such as CPU time were measured to test the efficiency of the proposed algorithm. The experimental results show that the proposed algorithm reduces CPU time by an average of 15% compared to the Genetic Algorithm, 12% compared to Simulated Annealing, and approximately 6% compared to the Genetic Annealing algorithm.
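The simulated-annealing component of such a placer can be sketched as follows; the half-perimeter wirelength cost, the swap-only move set, and all schedule parameters are illustrative assumptions, not the algorithm of the paper.

```python
import math
import random

def wirelength(pos, nets):
    """Half-perimeter wirelength: for each net, the bounding box of its blocks."""
    total = 0
    for net in nets:
        xs = [pos[b][0] for b in net]
        ys = [pos[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(nets, n_blocks, grid, t0=5.0, cooling=0.95,
                     moves_per_t=200, t_min=0.01, seed=0):
    """Simulated annealing over block positions on a grid x grid array."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(cells)
    pos = {b: cells[b] for b in range(n_blocks)}   # random initial placement
    cost = wirelength(pos, nets)
    t = t0
    while t > t_min:
        for _ in range(moves_per_t):
            a, b = rng.sample(range(n_blocks), 2)
            pos[a], pos[b] = pos[b], pos[a]        # propose a swap
            new_cost = wirelength(pos, nets)
            delta = new_cost - cost
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                cost = new_cost                    # accept (Metropolis rule)
            else:
                pos[a], pos[b] = pos[b], pos[a]    # reject: undo the swap
        t *= cooling                               # geometric cooling schedule
    return pos, cost
```

A real placer would also move blocks into empty sites and fold timing and routability objectives in alongside wirelength, which is where the multi-objective (NSGA-II) half of the hybrid enters.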
Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment
Directory of Open Access Journals (Sweden)
Yao Yao
2016-12-01
We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment, and we show through simulations of swarm robots in an A-life environment that the evolution of collective behaviour can likely be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.
Energy Technology Data Exchange (ETDEWEB)
Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.
2007-10-29
We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ≈85% complete and over 90% pure for clusters with masses above 1.0 × 10^14 h^-1 M_⊙ and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of its virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.
Directory of Open Access Journals (Sweden)
Abdulnaser M. Alshoaibi
2009-01-01
The purpose of this study is the determination of 2D crack paths and surfaces, as well as the evaluation of the stress intensity factors, as part of a damage tolerance assessment. Problem statement: The evaluation of SIFs and crack tip singular stresses for an arbitrary fractured structure is a challenging problem, involving the calculation of the crack path and the crack propagation rates at each step, especially under mixed-mode loading. Approach: This study provided a finite element code which produces results comparable to currently available commercial software. Throughout the simulation of crack propagation, automatic adaptive meshing was carried out in the vicinity of the crack front nodes and in the elements which represent the higher stress distribution. The finite element mesh was generated using the advancing front method. The adaptive remeshing process was carried out based on an a posteriori stress error norm scheme to obtain an optimal mesh. The onset criterion of crack propagation was based on the stress intensity factors, which serve as the most important parameter that must be accurately estimated. Facilitated by the singular elements, the displacement extrapolation technique is employed to calculate the stress intensity factor. The crack direction is predicted using the maximum circumferential stress theory. The fracture was modeled by the splitting node approach, and the trajectory follows the successive linear extensions of each crack increment. The propagation process is driven by the Linear Elastic Fracture Mechanics (LEFM) approach with minimum user interaction. Results: In evaluating the accuracy of the estimated stress intensity factors and the crack path predictions, the results were compared with sets of experimental data, benchmark analytical solutions, as well as numerical results of other researchers. Conclusion/Recommendations: The assessment indicated that the program was highly reliable to evaluate the stress intensity
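The maximum circumferential stress criterion used above for the crack direction admits a closed-form kink angle; a minimal sketch, assuming the standard mixed-mode form of the criterion (this is the textbook formula, not code from the study).

```python
import math

def kink_angle(KI, KII):
    """Crack kink angle from the maximum circumferential (hoop) stress
    criterion: KI*sin(theta) + KII*(3*cos(theta) - 1) = 0, taking the root
    that maximizes the hoop stress. Returns the angle in radians, measured
    from the current crack axis; for KII > 0 the crack kinks to negative
    angles, for KII < 0 to positive ones."""
    if KII == 0.0:
        return 0.0                      # pure mode I: crack grows straight
    r = KI / KII
    # closed-form root of the criterion above
    return 2.0 * math.atan((r - math.copysign(math.sqrt(r * r + 8.0), KII)) / 4.0)
```

The classical checks fall out directly: pure mode II gives the well-known ±70.5° kink, and KI = KII gives about 53.1°.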
Initial reconstruction results from a simulated adaptive small animal C shaped PET/MR insert
Energy Technology Data Exchange (ETDEWEB)
Efthimiou, Nikos [Technological Educational Institute of Athens (Greece); Kostou, Theodora; Papadimitroulas, Panagiotis [Technological Educational Institute of Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras (Greece); Charalampos, Tsoumpas [Division of Biomedical Imaging, University of Leeds, Leeds (United Kingdom); Loudos, George [Technological Educational Institute of Athens (Greece)
2015-05-18
Traditionally, most clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body as well as dedicated organ scans, which is not optimized with regard to sensitivity and resolution. Several groups have proposed the construction of dedicated PET inserts for MR scanners, rather than the construction of new integrated PET/MR scanners. The space inside an MR scanner is a limiting factor, which can be reduced further by the use of extra coils and renders the use of non-flexible cylindrical PET scanners difficult if not impossible. The incorporation of small SiPM arrays can provide the means to design adaptive PET scanners that fit in tight locations, which makes imaging possible and improves the sensitivity, due to the closer approximation to the organ of interest. In order to assess the performance of such a device, we simulated the geometry of a C-shaped PET scanner using GATE. The design of the C-PET was based on a realistic SiPM-BGO scenario. In order to reconstruct the simulated data with STIR, we had to calculate the system probability matrix which corresponds to this non-standard geometry. For this purpose we developed an efficient multi-threaded ray tracing technique to calculate the line integral paths in voxel arrays. One of the major features is the ability to automatically adjust the size of the FOV according to the geometry of the detectors. The initial results showed that the sensitivity improved as the angle between the detector arrays increased, thus better sampling the scanner's field of view (FOV) angularly. The more complete angular coverage also helped in improving the shape of the source in the reconstructed images. Furthermore, by adapting the FOV closer to the size of the source, the sensitivity per voxel is improved.
Hase, Chris
2010-01-01
In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with an objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload associated with developing and staffing JSCP [4] directed contingency plans against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, number of plans, planning processes, and number of planners and the time required for plan development provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight into key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.
Directory of Open Access Journals (Sweden)
S. D. Parkinson
2014-05-01
High resolution direct numerical simulations (DNS) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier–Stokes equations. Two-dimensional simulations are known to produce unrealistic coherent vortices that are not representative of the real three-dimensional physics. The effect of this phenomenon is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions, but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 10^6 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring mesh performance in capturing the range of dynamics. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. Use of discontinuous discretisations and adaptive unstructured meshing technologies, which reduce the required element count by approximately two orders of magnitude, results in high resolution DNS models of turbidity currents at a fraction of the cost of traditional FE models. The benefits of this technique will enable simulation of turbidity currents in complex and large domains where DNS modelling was previously unachievable.
The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture
Azdoud, Yan
2014-04-19
We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain. © 2014 Springer-Verlag Berlin Heidelberg.
Fast simulation of transport and adaptive permeability estimation in porous media
Energy Technology Data Exchange (ETDEWEB)
Berre, Inga
2005-07-01
The focus of the thesis is twofold: both fast simulation of transport in porous media and adaptive estimation of permeability are considered. A short introduction that motivates the work on these topics is given in Chapter 1. In Chapter 2, the governing equations for one- and two-phase flow in porous media are presented. Overall numerical solution strategies for the two-phase flow model are also discussed briefly. The concepts of streamlines and time-of-flight are introduced in Chapter 3. Methods for computing streamlines and time-of-flight are also presented in this chapter. Subsequently, in Chapters 4 and 5, the focus is on simulation of transport in a time-of-flight perspective. In Chapter 4, transport of fluids along streamlines is considered. Chapter 5 introduces a different viewpoint based on the evolution of isocontours of the fluid saturation. While the first chapters focus on the forward problem, which consists in solving a mathematical model given the reservoir parameters, Chapters 6, 7 and 8 are devoted to the inverse problem of permeability estimation. An introduction to the problem of identifying spatial variability in reservoir permeability by inversion of dynamic production data is given in Chapter 6. In Chapter 7, adaptive multiscale strategies for permeability estimation are discussed. Subsequently, Chapter 8 presents a level-set approach for improving piecewise constant permeability representations. Finally, Chapter 9 summarizes the results obtained in the thesis; in addition, the chapter gives some recommendations and suggests directions for future work. In Part II, the following papers are included in the order they were completed: Paper A: A Streamline Front Tracking Method for Two- and Three-Phase Flow Including Capillary Forces. I. Berre, H. K. Dahle, K. H. Karlsen, and H. F. Nordhaug. In Fluid flow and transport in porous media: mathematical and numerical treatment (South Hadley, MA, 2001), volume 295 of Contemp. Math., pages 49
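The time-of-flight coordinate used in Chapters 3-5 is the integral of slowness φ/|v| along a streamline; a minimal numerical sketch with a trapezoidal rule and a uniform porosity assumption (the sampled-speeds interface is hypothetical, not the thesis code).

```python
def time_of_flight(arc_lengths, speeds, porosity=0.2):
    """Time-of-flight tau(s) = integral of porosity / |v| with respect to
    arc length along a streamline, approximated by the trapezoidal rule.

    arc_lengths: increasing arc-length positions s_i along the streamline
    speeds:      Darcy speed |v| sampled at each s_i
    porosity:    assumed uniform here for simplicity
    """
    slowness = [porosity / v for v in speeds]      # phi / |v| at each sample
    tau = 0.0
    for i in range(1, len(arc_lengths)):
        ds = arc_lengths[i] - arc_lengths[i - 1]
        tau += 0.5 * (slowness[i] + slowness[i - 1]) * ds
    return tau
```

For a streamline of length 10 with constant speed 2 and porosity 0.5, this gives the expected travel time 0.5/2 × 10 = 2.5.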
Directory of Open Access Journals (Sweden)
Cari Pérez-Vives
2014-04-01
Purpose: To compare the optical and visual quality of implantable collamer lens (ICL) implantation and femtosecond laser in situ keratomileusis (F-LASIK) for myopia. Methods: The CRX1 adaptive optics visual simulator (Imagine Eyes, Orsay, France) was used to simulate the wavefront aberration pattern after the two surgical procedures for -3-diopter (D) and -6-D myopia. Visual acuity at different contrasts and contrast sensitivities at 10, 20, and 25 cycles/degree (cpd) were measured for 3-mm and 5-mm pupils. The modulation transfer function (MTF) and point spread function (PSF) were calculated for 5-mm pupils. Results: The F-LASIK MTF was worse than the ICL MTF, which was close to the diffraction-limited MTF. ICL cases showed less spreading of the PSF than F-LASIK cases. ICL cases showed better visual acuity values than F-LASIK cases for all pupils, contrasts, and myopic treatments (p<0.05), whereas differences in contrast sensitivities for -3-D myopia did not reach statistical significance (p>0.05). For -6-D myopia, however, statistically significant differences in contrast sensitivities were found for both pupils at all evaluated spatial frequencies (p<0.05). Contrast sensitivities were better after ICL implantation than after F-LASIK. Conclusions: ICL implantation and F-LASIK provide good optical and visual quality, although the former provides better outcomes in MTF, PSF, visual acuity, and contrast sensitivity, especially for cases with large refractive errors and pupil sizes. These outcomes are related to F-LASIK producing larger high-order aberrations.
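The diffraction-limited MTF against which both procedures are compared has a standard closed form for a circular pupil; a sketch, with the cutoff frequency left as an input rather than derived from pupil size and wavelength.

```python
import math

def diffraction_mtf(f, f_cutoff):
    """Diffraction-limited MTF of an aberration-free circular pupil at
    spatial frequency f, with incoherent illumination:
        MTF(u) = (2/pi) * (acos(u) - u*sqrt(1 - u^2)),  u = f / f_cutoff,
    and MTF = 0 beyond the cutoff. f and f_cutoff must share units
    (e.g. cycles/degree)."""
    u = f / f_cutoff
    if u >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(u) - u * math.sqrt(1.0 - u * u))
```

The curve starts at 1 at zero frequency and falls monotonically to 0 at the cutoff, which is why an ICL MTF "close to diffraction-limited" is the best achievable outcome at each spatial frequency.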
Adaptations to isolated shoulder fatigue during simulated repetitive work. Part II: Recovery.
McDonald, Alison C; Tse, Calvin T F; Keir, Peter J
2016-08-01
The shoulder allows kinematic and muscular changes to facilitate continued task performance during prolonged repetitive work. The purpose of this work was to examine changes during simulated repetitive work in response to a fatigue protocol. Participants performed 20 one-minute work cycles comprising 4 shoulder-centric tasks, a fatigue protocol, followed by 60 additional cycles. The fatigue protocol targeted the anterior deltoid and cycled between static and dynamic actions. EMG was collected from 14 upper extremity and back muscles and three-dimensional motion was captured during each work cycle. Participants completed post-fatigue work despite EMG manifestations of muscle fatigue, reduced flexion strength (by 28%), and increased perceived exertion (∼3 times). Throughout the post-fatigue work cycles, participants maintained performance via kinematic and muscular adaptations, such as reduced glenohumeral flexion and scapular rotation, which were task specific and varied throughout the hour of simulated work. By the end of 60 post-fatigue work cycles, signs of fatigue persisted in the anterior deltoid and developed in the middle deltoid, yet perceived exertion and strength returned to pre-fatigue levels. Recovery from fatigue elicits changes in muscle activity and movement patterns that may not be perceived by the worker, which has important implications for injury risk.
Effenberger, Frederic; Arnold, Lukas; Grauer, Rainer; Dreher, Jürgen
2011-01-01
The formation of a thin current sheet in a magnetic quasi-separatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al., A&A 444, 961 (2005). In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limite...
The Numerical Simulation of Ship Waves Using Cartesian Grid Methods with Adaptive Mesh Refinement
Dommermuth, Douglas G; Beck, Robert F; O'Shea, Thomas T; Wyatt, Donald C; Olson, Kevin; MacNeice, Peter
2014-01-01
Cartesian-grid methods with Adaptive Mesh Refinement (AMR) are ideally suited for simulating the breaking of waves, the formation of spray, and the entrainment of air around ships. As a result of the Cartesian-grid formulation, minimal input is required to describe the ship's geometry. A surface panelization of the ship hull is used as input to automatically generate a three-dimensional model. No three-dimensional gridding is required. The AMR portion of the numerical algorithm automatically clusters grid points near the ship in regions where wave breaking, spray formation, and air entrainment occur. Away from the ship, where the flow is less turbulent, the mesh is coarser. The numerical computations are implemented using parallel algorithms. Together, the ease of input and usage, the ability to resolve complex free-surface phenomena, and the speed of the numerical algorithms provide a robust capability for simulating the free-surface disturbances near a ship. Here, numerical predictions, with and without AMR,...
Zeller, Fabian; Zacharias, Martin
2014-02-11
The accurate calculation of potentials of mean force for ligand-receptor binding is one of the most important applications of molecular simulation techniques. Typically, the separation distance between ligand and receptor is chosen as a reaction coordinate along which a PMF can be calculated with the aid of umbrella sampling (US) techniques. In addition, restraints can be applied on the relative position and orientation of the partner molecules to reduce accessible phase space. An approach combining such phase space reduction with flattening of the free energy landscape and configurational exchanges has been developed, which significantly improves the convergence of PMF calculations in comparison with standard umbrella sampling. The free energy surface along the reaction coordinate is smoothed by iteratively adapting biasing potentials corresponding to previously calculated PMFs. Configurations are allowed to exchange between the umbrella simulation windows via the Hamiltonian replica exchange method. The application to a DNA molecule in complex with a minor groove binding ligand indicates significantly improved convergence and complete reversibility of the sampling along the pathway. The calculated binding free energy is in excellent agreement with experimental results. In contrast, the application of standard US resulted in large differences between PMFs calculated for association and dissociation pathways. The approach could be a useful alternative to standard US for computational studies on biomolecular recognition processes.
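The Hamiltonian replica exchange step between umbrella windows follows the usual Metropolis rule on swapping configurations; a sketch assuming harmonic biases with a hypothetical spring constant (a one-dimensional caricature of the scheme, not the authors' implementation).

```python
import math
import random

def umbrella_energy(x, center, k=10.0):
    """Harmonic umbrella bias U_i(x) = 0.5 * k * (x - c_i)^2 on the reaction
    coordinate x; the spring constant k is an illustrative assumption."""
    return 0.5 * k * (x - center) ** 2

def exchange_accepted(x_i, x_j, c_i, c_j, beta=1.0, k=10.0, rng=random):
    """Hamiltonian replica exchange between neighbouring umbrella windows
    i and j: swap configurations x_i and x_j with probability
    min(1, exp(-beta * Delta)), where
    Delta = [U_i(x_j) + U_j(x_i)] - [U_i(x_i) + U_j(x_j)].
    Only the bias energies enter because the unbiased Hamiltonian cancels."""
    delta = (umbrella_energy(x_j, c_i, k) + umbrella_energy(x_i, c_j, k)
             - umbrella_energy(x_i, c_i, k) - umbrella_energy(x_j, c_j, k))
    return delta <= 0.0 or rng.random() < math.exp(-beta * delta)
```

A swap that moves each configuration toward its own window center lowers the total bias energy and is always accepted, which is the mechanism that lets configurations migrate along the reaction coordinate.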
Energy Technology Data Exchange (ETDEWEB)
Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
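The Sedov length that sets the deceleration scale can be estimated as the radius at which the swept-up rest-mass energy equals the explosion energy; a sketch for a uniform medium, noting that the order-unity numerical factor varies between conventions in the literature.

```python
import math

M_P = 1.67262192e-27      # proton mass [kg]
C = 2.99792458e8          # speed of light [m/s]

def sedov_length(E, n):
    """Sedov length for a blast wave of energy E [J] in a uniform medium of
    proton number density n [m^-3]: the radius l at which the swept-up
    rest-mass energy equals E,
        (4*pi/3) * l^3 * n * m_p * c^2 = E,
    i.e. l = (3 E / (4 pi n m_p c^2))^(1/3). Stratified media (rho ~ r^-k)
    push this transition outward, as discussed in the abstract above."""
    return (3.0 * E / (4.0 * math.pi * n * M_P * C * C)) ** (1.0 / 3.0)
```

The definition round-trips: plugging the energy implied by a chosen radius back in recovers that radius.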
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
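The ROC comparison described above can be reproduced with a rank-based AUC estimator. A minimal dependency-free sketch in Python; the urban/non-urban labels and the two model scores below are synthetic stand-ins, not the study's MARS and ANN outputs:

```python
import random

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic stand-in data: 1 = urbanized cell, 0 = not urbanized.
random.seed(42)
labels = [random.randint(0, 1) for _ in range(400)]
mars_like = [y * 0.6 + 0.8 * random.random() for y in labels]  # weaker model
ann_like = [y * 0.7 + 0.8 * random.random() for y in labels]   # stronger model
print(roc_auc(labels, mars_like), roc_auc(labels, ann_like))
```

The AUC of the better-separated score distribution comes out higher, mirroring the ANN-vs-MARS comparison in the abstract.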
Olynick, David P.; Hassan, H. A.; Moss, James N.
1988-01-01
A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT
Dannberg, Juliane; Heister, Timo
2016-12-01
Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulation, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating at greater depths. The implementation is available in the open source ASPECT repository.
Composition dependent thermal annealing behaviour of ion tracks in apatite
Energy Technology Data Exchange (ETDEWEB)
Nadzri, A., E-mail: allina.nadzri@anu.edu.au [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Schauries, D.; Mota-Santiago, P.; Muradoglu, S. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Trautmann, C. [GSI Helmholtz Centre for Heavy Ion Research, Planckstrasse 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, 64287 Darmstadt (Germany); Gleadow, A.J.W. [School of Earth Science, University of Melbourne, Melbourne, VIC 3010 (Australia); Hawley, A. [Australian Synchrotron, 800 Blackburn Road, Clayton, VIC 3168 (Australia); Kluth, P. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia)
2016-07-15
Natural apatite samples with different F/Cl content from a variety of geological locations (Durango, Mexico; Mud Tank, Australia; and Snarum, Norway) were irradiated with swift heavy ions to simulate fission tracks. The annealing kinetics of the resulting ion tracks was investigated using synchrotron-based small-angle X-ray scattering (SAXS) combined with ex situ annealing. The activation energies for track recrystallization were extracted and are consistent with previous track-etching studies; tracks in the chlorine-rich Snarum apatite are more resistant to annealing than those in the other compositions.
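Activation energies of this kind are typically extracted from an Arrhenius analysis of annealing rates measured at several temperatures. A dependency-free sketch; the rates below are synthetic, generated from a known Ea rather than from SAXS data:

```python
import math

KB = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, rates):
    """Least-squares Arrhenius fit ln(rate) = ln(A) - Ea/(kB*T);
    returns Ea in eV from the slope of ln(rate) versus 1/T."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * KB

# Synthetic annealing rates generated with Ea = 1.5 eV should be recovered exactly.
temps = [600.0, 650.0, 700.0, 750.0]
rates = [1e12 * math.exp(-1.5 / (KB * t)) for t in temps]
print(activation_energy(temps, rates))
```

Because the synthetic data are exactly Arrhenius, the fit returns the input Ea up to floating-point error; real SAXS-derived rates would scatter about the line.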
Systematic testing of flood adaptation options in urban areas through simulations
Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten
2016-04-01
While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network, through local retention of rainwater and the modification of surface flow paths, to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels for the drivers of flood risk where the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into
Evidence for quantum annealing with more than one hundred qubits
Boixo, Sergio; Rønnow, Troels F.; Isakov, Sergei V.; Wang, Zhihui; Wecker, David; Lidar, Daniel A.; Martinis, John M.; Troyer, Matthias
2014-03-01
Quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators, may be built with capabilities exceeding those of classical computers. A quantum annealer, in particular, solves optimization problems by evolving a known initial configuration at non-zero temperature towards the ground state of a Hamiltonian encoding a given problem. Here, we present results from tests on a 108-qubit D-Wave One device based on superconducting flux qubits. By studying correlations we find that the device performance is inconsistent with classical annealing and with models based on classical spin dynamics. In contrast, we find that the device correlates well with simulated quantum annealing. We find further evidence for quantum annealing in the form of small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the device we compare it against optimized classical algorithms.
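For contrast with the quantum device, the classical simulated annealing baseline on an Ising Hamiltonian fits in a few lines. The ferromagnetic chain below is an illustrative instance, not one of the benchmark spin glasses from the study:

```python
import math
import random

def anneal_ising(J, h, steps=20000, t0=2.0, t1=0.05, seed=1):
    """Classical simulated annealing for
    H(s) = -sum_{i<j} J[i][j] s_i s_j - sum_i h[i] s_i, s_i = +/-1, J symmetric."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)          # geometric cooling schedule
        i = rng.randrange(n)
        # Energy change of flipping spin i, from its local field.
        dE = 2.0 * s[i] * (h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i))
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            s[i] = -s[i]                            # Metropolis acceptance
    energy = (-sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
              - sum(h[i] * s[i] for i in range(n)))
    return s, energy

# Ferromagnetic 6-spin chain: the ground state is all spins aligned, energy -5.
J = [[0.0] * 6 for _ in range(6)]
for i in range(5):
    J[i][i + 1] = J[i + 1][i] = 1.0
spins, energy = anneal_ising(J, [0.0] * 6)
print(spins, energy)
```

On this tiny instance the cooling schedule comfortably reaches the ground state; the interesting regime in the paper is hard spin-glass instances where such classical dynamics and the device behave measurably differently.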
Institute of Scientific and Technical Information of China (English)
毛力; 刘兴阳; 沈明明
2011-01-01
In view of the advantages and disadvantages of K-harmonic means (KHM) and simulated annealing particle swarm optimization (SAPSO), a hybrid clustering algorithm combining KHM and SAPSO (KHM-SAPSO) was presented in this paper. With KHM, the particle swarm was divided into several sub-groups. Each particle iteratively updated its location based on its individual extreme value and the global extreme value of the sub-group it belonged to. With the simulated annealing technique, the algorithm prevented premature convergence and improved the calculation accuracy. Using the Iris, Zoo, Wine and Image Segmentation databases, and taking the F-measure as the criterion for evaluating clustering quality, this paper evaluated the new hybrid algorithm. Our experimental results indicated that the new algorithm significantly improved clustering effectiveness by avoiding being trapped in local optima, and enhanced the global search capability while achieving a faster convergence rate. This algorithm has been adopted by an aquaculture water quality analysis system of a freshwater breeding base in Wuxi, where it is running effectively.
Institute of Scientific and Technical Information of China (English)
陈雄峰; 吴景岚; 朱文兴
2014-01-01
A hybrid genetic simulated annealing algorithm is presented for solving the problem of VLSI standard cell placement with up to millions of cells. Firstly, to make the genetic algorithm capable of handling very large scale standard cell placement, the strategies of a small population size, dynamic population updating, and crossover localization are adopted, and the global search and local search of the genetic algorithm are coordinated. Then, by introducing hill climbing (HC) and simulated annealing (SA) into the framework of the genetic algorithm and the internal procedure of its operators, an effective crossover operator named Net Cycle Crossover and local search algorithms for the placement problem are designed to further improve the evolutionary efficiency of the algorithm and the quality of its placement results. In the algorithm procedure, the HC method and the SA method focus on array placement and non-array placement, respectively. The experimental results on the Peko suite3, Peko suite4 and ISPD04 benchmark circuits show that the proposed algorithm can handle array and non-array placements with 10,000-1,600,000 cells and 10,000-210,000 cells, respectively, and can effectively improve the quality of placement results in a reasonable running time.
A Simulated Annealing Heuristic Based on Template Routes for the Consistent Vehicle Routing Problem
Institute of Scientific and Technical Information of China (English)
刘恒宇; 汝宜红
2015-01-01
According to the "service consistency" characteristic of the consistent vehicle routing problem, a template-based simulated annealing heuristic (TSA) is proposed to find better solutions. The algorithm proceeds in two stages: in the first stage, the template routes are constructed; in the second stage, the template routes serve as a reference to determine the daily vehicle routing schedules. The simulated annealing heuristic is applied in both stages to obtain optimal solutions. Numerical experiments are conducted on small- and middle-scale benchmark data sets, and the results are compared with those of the ConRTR and TTS algorithms; both the routing schedules and the "service consistency" indicators obtained by TSA are improved. The results show that using TSA to plan vehicle delivery routes not only reduces operating cost but also yields higher service quality.
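The route-optimisation stages can be illustrated with a generic simulated-annealing 2-opt search over a single tour. This is a toy stand-in under assumed parameters; the actual TSA operates on template routes and daily schedules, not a single TSP tour:

```python
import math
import random

def tour_length(pts, tour):
    """Cyclic tour length over 2-D points."""
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def anneal_tour(pts, steps=30000, t0=1.0, t1=1e-3, seed=0):
    """Simulated annealing over tours using random 2-opt segment
    reversals with Metropolis acceptance and geometric cooling."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    best = tour[:]
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        delta = tour_length(pts, cand) - tour_length(pts, tour)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(pts, tour) < tour_length(pts, best):
                best = tour[:]
    return best

# Eight customers on a unit circle: the optimal tour follows the circle
# (length 16*sin(pi/8), about 6.12).
pts = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8)) for i in range(8)]
best = anneal_tour(pts)
print(tour_length(pts, best))
```

For points in convex position every crossing tour admits an improving 2-opt move, so the late (near-greedy) phase of the schedule reliably uncrosses the route.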
Alliss, R.
2014-09-01
Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure function Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed using a multi-node Linux cluster using the Intel chip architecture. The WRF model is configured to run at 1 km horizontal resolution and centered on the Mauna Loa Observatory (MLO) of the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
Rosenberg, Duane; Fournier, Aimé; Fischer, Paul; Pouquet, Annick
2006-06-01
An object-oriented geophysical and astrophysical spectral-element adaptive refinement (GASpAR) code is introduced. Like most spectral-element codes, GASpAR combines finite-element efficiency with spectral-method accuracy. It is also designed to be flexible enough for a range of geophysics and astrophysics applications where turbulence or other complex multiscale problems arise. The formalism accommodates both conforming and non-conforming elements. Several aspects of this code derive from existing methods, but here are synthesized into a new formulation of dynamic adaptive refinement (DARe) of non-conforming h-type. As a demonstration of the code, several new 2D test cases are introduced that have time-dependent analytic solutions and exhibit localized flow features, including the 2D Burgers equation with straight, curved-radial and oblique-colliding fronts. These are proposed as standard test problems for comparable DARe codes. Quantitative errors are reported for 2D spatial and temporal convergence of DARe.
Improved mapping of the travelling salesman problem for quantum annealing
Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David
2015-03-01
We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of the TSP to an Ising spin glass Hamiltonian and compare it to previously known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of the TSP on quantum annealers.
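For reference, the conventional time-indexed ("one-hot") mapping that such work builds on encodes a binary x[v][t] = 1 if city v is visited at tour position t, with quadratic penalties enforcing a valid permutation. A sketch of that baseline construction (not the paper's improved mapping; penalty weight and instance are illustrative):

```python
import itertools

def tsp_qubo(dist, penalty):
    """Baseline one-hot QUBO for an n-city cyclic TSP.
    Variable index v*n + t means city v occupies tour position t."""
    n = len(dist)
    idx = lambda v, t: v * n + t
    Q = {}

    def add(a, b, w):
        key = (min(a, b), max(a, b))
        Q[key] = Q.get(key, 0.0) + w

    # Tour-length terms between consecutive positions (cyclic).
    for u in range(n):
        for v in range(n):
            if u != v:
                for t in range(n):
                    add(idx(u, t), idx(v, (t + 1) % n), dist[u][v])
    # One-hot penalties: each city used once, each position filled once.
    # P*(sum x - 1)^2 expands to -P on diagonals, +2P on pairs (constant dropped).
    rows = [[idx(v, t) for t in range(n)] for v in range(n)]
    cols = [[idx(v, t) for v in range(n)] for t in range(n)]
    for group in rows + cols:
        for a in group:
            add(a, a, -penalty)
        for a, b in itertools.combinations(group, 2):
            add(a, b, 2 * penalty)
    return Q

def qubo_energy(Q, x):
    return sum(w * x[a] * x[b] for (a, b), w in Q.items())

# Valid tour 0 -> 1 -> 2 -> 0 for a 3-city instance.
dist = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
Q = tsp_qubo(dist, penalty=10.0)
x = [1.0 if v == t else 0.0 for v in range(3) for t in range(3)]
offset = 2 * 3 * 10.0   # dropped constants: one +P per one-hot group
print(qubo_energy(Q, x) + offset)   # equals the tour length, 6.0
```

This mapping needs n² binary variables; reducing that overhead and the required penalty strength is exactly the kind of improvement the abstract targets.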
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
Institute of Scientific and Technical Information of China (English)
Jiang Bao-Guang; Cao Zhao-Liang; Mu Quan-Quan; Hu Li-Fa; Li Chao; Xuan Li
2008-01-01
In order to obtain a clear image of the retina of a model eye, an adaptive optics system used to correct the wave-front error is introduced in this paper. The spatial light modulator that we use here is a liquid-crystal-on-silicon device instead of a conventional deformable mirror. A paper with carbon granules is used to simulate the retina of the human eye. The pupil size of the model eye is adjustable (3-7 mm). A Shack-Hartmann wave-front sensor is used to detect the wave-front aberration. With this construction, a peak-to-valley value of 0.086 λ is achieved, where λ is the wavelength. The modulation transfer functions before and after correction are compared, and the resolution of this system after correction (69 lp/mm) is very close to the diffraction-limited resolution. A carbon granule on the white paper, 4.7 μm in size, is seen clearly. The size of a retina cell is between 4 and 10 μm, so this system is capable of imaging the retina of the human eye.
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Energy Technology Data Exchange (ETDEWEB)
Klassen, Mikhail; Pudritz, Ralph E. [Department of Physics and Astronomy, McMaster University 1280 Main Street W, Hamilton, ON L8S 4M1 (Canada); Kuiper, Rolf [Max Planck Institute for Astronomy Königstuhl 17, D-69117 Heidelberg (Germany); Peters, Thomas [Institut für Computergestützte Wissenschaften, Universität Zürich Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Banerjee, Robi; Buntemeyer, Lars, E-mail: klassm@mcmaster.ca [Hamburger Sternwarte, Universität Hamburg Gojenbergsweg 112, D-21029 Hamburg (Germany)
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid
Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-01
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Owolabi, Kolade M.
2017-03-01
In this paper, some nonlinear space-fractional order reaction-diffusion equations (SFORDE) on a finite but large spatial domain x ∈ [0, L], x = (x, y, z), and t ∈ [0, T] are considered. Also in this work, the standard reaction-diffusion system with boundary conditions is generalized by replacing the second-order spatial derivatives with Riemann-Liouville space-fractional derivatives of order α, for 0 < α < 2. A Fourier spectral method is introduced as a better alternative to existing low-order schemes for the integration of fractional-in-space reaction-diffusion problems, in conjunction with an adaptive exponential time differencing method, and a range of one-, two- and three-component SFORDE are solved numerically to obtain patterns in one and two dimensions, with a straightforward extension to three spatial dimensions, in sub-diffusive (0 < α < 1) and super-diffusive (1 < α < 2) scenarios. It is observed that computer simulations of SFORDE give enough evidence that pattern formation in a fractional medium at certain parameter values is practically the same as in the standard reaction-diffusion case. With application to models in biology and physics, different spatiotemporal dynamics are observed and displayed.
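The key spectral-method fact, that the fractional Laplacian acts as multiplication by |k|^α in Fourier space, can be checked on the purely diffusive part of such a system. A dependency-free sketch with a naive O(N²) DFT; a real solver would use an FFT and couple in the nonlinear reaction term, e.g. via exponential time differencing:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)).real / n
            for j in range(n)]

def fractional_diffusion_step(u, alpha, dt, length):
    """Advance u_t = -(-Laplacian)^(alpha/2) u exactly over dt:
    each Fourier mode decays by exp(-|k|^alpha * dt)."""
    n = len(u)
    U = dft(u)
    for k in range(n):
        m = k if k <= n // 2 else k - n          # signed mode number
        wavenumber = 2 * math.pi * m / length
        U[k] *= math.exp(-abs(wavenumber) ** alpha * dt)
    return idft(U)

# cos(2x) on [0, 2*pi): mode |k| = 2, so it decays by exp(-2**alpha * dt).
n = 32
u = [math.cos(2 * (2 * math.pi * j / n)) for j in range(n)]
u_frac = fractional_diffusion_step(u, alpha=1.0, dt=0.1, length=2 * math.pi)
u_std = fractional_diffusion_step(u, alpha=2.0, dt=0.1, length=2 * math.pi)
print(u_frac[0], u_std[0])   # exp(-0.2) vs exp(-0.4): the smaller alpha damps less
```

Lowering α weakens the damping of this |k| > 1 mode, which is the spectral signature of the sub- versus super-diffusive regimes discussed above.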
Institute of Scientific and Technical Information of China (English)
杨建宇; 岳彦利; 宋海荣; 汤赛; 叶思菁; 徐凡
2015-01-01
Monitoring points in county areas are the foundation for reflecting changes of cultivated land quality, which directly affect the result of farmland grading and its accuracy. Through the monitoring network for cultivated land quality in a county area, the distribution and changing trend of the cultivated land quality can be reflected. Besides, the quality of non-sampled locations should also be estimated with the data of the sampling points. Due to the correlation among spatial samples, traditional methods such as simple random sampling, stratified sampling and systematic sampling are inefficient for the task above. Thus, we propose a new spatial sampling and optimizing method based on spatial simulated annealing (SSA). This paper presents a pre-processing method to determine the number of sampling points, including preprocessing the data of cultivated land quality before sampling, exploring the spatial correlation and spatial distribution pattern of cultivated land quality, and computing the appropriate quantity of sampling points by analyzing the
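The core of spatial simulated annealing can be sketched as a Metropolis search over sample locations, here minimising the mean distance from every cell to its nearest monitoring point (the MMSD design criterion). Grid, sample count, and cooling schedule below are illustrative, not the Daxing District configuration:

```python
import math
import random

def mean_min_dist(cells, samples):
    """MMSD criterion: mean distance from each cell to its nearest sample."""
    return sum(min(math.dist(c, s) for s in samples) for c in cells) / len(cells)

def ssa_sampling(cells, n_samples, steps=2000, t0=0.5, t1=1e-3, seed=0):
    """Spatial simulated annealing: relocate one sample at a time,
    accepting worse layouts with Metropolis probability while cooling."""
    rng = random.Random(seed)
    samples = rng.sample(cells, n_samples)
    cost = mean_min_dist(cells, samples)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)       # geometric cooling
        cand = samples[:]
        cand[rng.randrange(n_samples)] = rng.choice(cells)
        c_new = mean_min_dist(cells, cand)
        if c_new <= cost or rng.random() < math.exp((cost - c_new) / t):
            samples, cost = cand, c_new
    return samples, cost

# Place 5 monitoring points on a 10 x 10 grid of cells.
cells = [(x, y) for x in range(10) for y in range(10)]
layout, cost = ssa_sampling(cells, 5)
print(layout, cost)
```

The annealed layout spreads the points across the grid, yielding a far lower MMSD than a clustered or purely random layout; the county-scale method additionally weights areas where grading factors have changed.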
Institute of Scientific and Technical Information of China (English)
Xue Jianjun; You Xiaohu
1997-01-01
Channel equalization is essential in the Pan-European GSM mobile communication system. Maximum likelihood sequence estimation (MLSE) using the Viterbi algorithm (VA) is commonly recommended for the equalization, but it can only accommodate channels with limited time delay spread. In [1], we presented a mean field annealing (MFA) partially connected neural equalizer for the GSM system, in which the complexity is linearly proportional to the time delay spread and a relatively fast convergence speed is therefore achieved. But the annealing coefficient of the MFA equalizer is fixed, which is not flexible in time-varying circumstances such as mobile communications. To decrease the computation of the MFA approach and make it easier to use in practice, the MFA approach is treated as a homotopy problem. The ordinary differential equations that the MFA approach should obey are derived. These equations can be used to reflect the deviation of the iteration result from the track of the MFA approach. Based on this result, an adaptive annealing control algorithm is proposed, which can dynamically control the annealing coefficient according to the iteration deviation. Computer simulations show that our approach can provide a much higher convergence speed and performance improvement over the 16-state and 32-state VAs that are usually suggested for practical applications.
de Beurs, Derek P; Terluin, Berend; Verhaak, Peter F
2017-01-01
Background Efficient screening questionnaires are useful in general practice. Computerized adaptive testing (CAT) is a method to improve the efficiency of questionnaires, as only the items that are particularly informative for a certain responder are dynamically selected. Objective The objective of this study was to test whether CAT could improve the efficiency of the Four-Dimensional Symptom Questionnaire (4DSQ), a frequently used self-report questionnaire designed to assess common psychosocial problems in general practice. Methods A simulation study was conducted using a sample of Dutch patients visiting a general practitioner (GP) with psychological problems (n=379). Responders completed a paper-and-pencil version of the 50-item 4DSQ and a psychometric evaluation was performed to check if the data agreed with item response theory (IRT) assumptions. Next, a CAT simulation was performed for each of the four 4DSQ scales (distress, depression, anxiety, and somatization), based on the given responses as if they had been collected through CAT. The following two stopping rules were applied for the administration of items: (1) stop if measurement precision is below a predefined level, or (2) stop if more than half of the items of the subscale are administered. Results In general, the items of each of the four scales agreed with IRT assumptions. Application of the first stopping rule reduced the length of the questionnaire by 38% (from 50 to 31 items on average). When the second stopping rule was also applied, the total number of items could be reduced by 56% (from 50 to 22 items on average). Conclusions CAT seems useful for improving the efficiency of the 4DSQ by 56% without losing a considerable amount of measurement precision. The CAT version of the 4DSQ may be useful as part of an online assessment to investigate the severity of mental health problems of patients visiting a GP. This simulation study is the first step needed for the development of a CAT version of the 4
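The adaptive item-selection loop with both stopping rules can be sketched under a two-parameter logistic (2PL) IRT model. The item bank, prior, and thresholds below are made up for illustration, not 4DSQ calibration values:

```python
import math
import random

def p_correct(theta, a, b):
    """2PL endorsement probability for an item with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def cat_simulation(items, true_theta, se_stop=0.4, max_frac=0.5, seed=0):
    """Toy CAT: administer the most informative item at the current estimate,
    update by expected a posteriori (EAP) on a grid, and stop when the
    posterior SD < se_stop (rule 1) or half the bank is used (rule 2)."""
    rng = random.Random(seed)
    grid = [g / 10.0 for g in range(-40, 41)]          # theta grid on [-4, 4]
    post = [math.exp(-g * g / 2.0) for g in grid]       # standard normal prior
    remaining = list(range(len(items)))
    n_used, theta = 0, 0.0
    while remaining and n_used < max_frac * len(items):
        def info(i):
            # Fisher information of a 2PL item at the current estimate: a^2 p (1-p).
            p = p_correct(theta, *items[i])
            return items[i][0] ** 2 * p * (1.0 - p)
        i = max(remaining, key=info)
        remaining.remove(i)
        n_used += 1
        u = rng.random() < p_correct(true_theta, *items[i])   # simulated response
        post = [w * (p_correct(g, *items[i]) if u else 1.0 - p_correct(g, *items[i]))
                for w, g in zip(post, grid)]
        z = sum(post)
        theta = sum(w * g for w, g in zip(post, grid)) / z
        sd = math.sqrt(sum(w * (g - theta) ** 2 for w, g in zip(post, grid)) / z)
        if sd < se_stop:
            break
    return theta, n_used

# Hypothetical 50-item bank with spread difficulties, simulee at theta = 1.0.
items = [(1.0 + 0.02 * k, -2.0 + 4.0 * k / 49) for k in range(50)]
theta_hat, n_used = cat_simulation(items, true_theta=1.0)
print(theta_hat, n_used)
```

Because the information criterion keeps selecting items near the running estimate, the posterior narrows quickly and one of the two stopping rules triggers well before the full bank is exhausted, mirroring the item savings reported above.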
Rampy, Rachel A.
Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade have the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2--1.2 mm at 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and
Institute of Scientific and Technical Information of China (English)
Zu Yun-Xiao; Zhou Jie
2012-01-01
Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.
Validation through simulations of a C_n^2 profiler for the ESO/VLT Adaptive Optics Facility
Garcia-Rissmann, A.; Guesalaga, A.; Kolb, J.; Le Louarn, M.; Madec, P.-Y.; Neichel, B.
2015-04-01
The Adaptive Optics Facility (AOF) project envisages transforming one of the VLT units into an adaptive telescope and providing its ESO (European Southern Observatory) second-generation instruments with turbulence-corrected wavefronts. For MUSE and HAWK-I this correction will be achieved through the GALACSI and GRAAL AO modules working in conjunction with a 1170-actuator deformable secondary mirror (DSM) and the new Laser Guide Star Facility (4LGSF). Multiple wavefront sensors will enable GLAO (ground layer adaptive optics) and LTAO (laser tomography adaptive optics) capabilities, whose performance can greatly benefit from knowledge of the stratification of turbulence in the atmosphere. This work, based entirely on end-to-end simulations, describes the validation tests conducted on a C_n^2 profiler adapted to the AOF specifications. Because an absolute profile calibration depends strongly on reliable knowledge of the turbulence parameters r0 and L0, the tests presented here refer only to normalized output profiles. Uncertainties in the input parameters inherent to the code are tested, as is the profiler's response to different turbulence distributions. The profiler adopts a correction for the unseen turbulence, critical for the GRAAL mode, and highlights the effects of masking out parts of the corrected wavefront on the results. Simulations of data with typical turbulence profiles from Paranal were input to the profiler, showing that it is possible to reliably identify the input features for all the AOF modes.
Institute of Scientific and Technical Information of China (English)
钱晓杨
2013-01-01
By examining the correlations and the multicollinearity among physical fitness indicators, this paper improves the existing linear regression model and proposes a new optimized ridge regression estimation algorithm that determines the ridge parameter k by simulated annealing. Experiments, with mean squared error and common sense as reference standards, demonstrate the accuracy and reliability of the algorithm.
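The record above gives no implementation details; as a rough sketch of the idea (the function names, the holdout-MSE objective, and the cooling schedule are our illustrative assumptions, not the paper's), selecting the ridge parameter k by simulated annealing might look like:

```python
import numpy as np

def holdout_mse(k, Xtr, ytr, Xte, yte):
    """Holdout mean squared error of the ridge estimate with parameter k."""
    p = Xtr.shape[1]
    beta = np.linalg.solve(Xtr.T @ Xtr + k * np.eye(p), Xtr.T @ ytr)
    return float(np.mean((yte - Xte @ beta) ** 2))

def anneal_ridge_k(Xtr, ytr, Xte, yte, k0=1.0, T0=1.0, alpha=0.95, steps=400, seed=0):
    """Search for a ridge parameter k >= 0 by simulated annealing on holdout MSE."""
    rng = np.random.default_rng(seed)
    k = k0
    cost = holdout_mse(k, Xtr, ytr, Xte, yte)
    best_k, best_cost = k, cost
    T = T0
    for _ in range(steps):
        cand = abs(k + rng.normal(scale=0.1 * (1.0 + k)))   # perturb, stay non-negative
        c = holdout_mse(cand, Xtr, ytr, Xte, yte)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if c < cost or rng.random() < np.exp((cost - c) / T):
            k, cost = cand, c
            if cost < best_cost:
                best_k, best_cost = k, cost
        T *= alpha                                          # geometric cooling
    return best_k, best_cost
```

A holdout (or cross-validated) objective is used here because the training MSE alone is minimized trivially at k = 0.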
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit-solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. Applied to the study of Ala-pentapeptide dimerization in explicit solvent, the improved VAUS showed an advantage over regular AUS and makes larger biological systems amenable to such simulations.
Design and Simulation of a Smart Home managed by an Intelligent Self-Adaptive System
Directory of Open Access Journals (Sweden)
Basman M. Hasan Alhafidh
2016-08-01
Home automation and control systems, as basic elements of smart cities, have played a key role in the development of our home environments. They have a wide range of applications at home, in fields such as security and monitoring, healthcare, energy, and entertainment. The improvement of living standards makes people delegate more and more of their needs to a home automation system. Such systems have been built with capabilities for predicting what the user intends to do in a smart home environment. However, several issues need more investigation and solutions: (1) much research adopts a specific application without integrating different varieties of applications in one environment; (2) no study tries to show the real effect of, or even evaluates the implementation of, predicted actions that have been established via the home's intelligent gateway; (3) there is an interoperability issue due to the use of different kinds of home applications with different protocols for message context. In this proposal, we describe a new approach to an intelligent self-adaptive system that can precisely monitor a stakeholder's behavior and analyze his or her actions, trying to anticipate the stakeholder's behavior in the future. In addition, we evaluate the real effect of predicted actions after they are implemented by an intelligent gateway in a simulated home environment. The prediction process works by analyzing a sequence of the user's interaction events with heterogeneous, distributed nodes in the environment using an intelligent gateway; predicting the stakeholder's next action can then be performed with analytical algorithms. The main novelties in the proposed approach are threefold: (I) developing a learning technique, embedded in an intelligent gateway, that builds a model of users' behavior and interactions to balance the needs of multiple users within a smart home environment. (II)
Han, Fei; Lubineau, Gilles; Azdoud, Yan
2016-09-01
The objective (mesh-independent) simulation of evolving discontinuities, such as cracks, remains a challenge. Current techniques are highly complex or involve intractable computational costs, making simulations up to complete failure difficult. We propose a framework as a new route toward solving this problem that adaptively couples local-continuum damage mechanics with peridynamics to objectively simulate all the steps that lead to material failure: damage nucleation, crack formation and propagation. Local-continuum damage mechanics successfully describes the degradation related to dispersed microdefects before the formation of a macrocrack. However, when damage localizes, it suffers spurious mesh dependency, making the simulation of macrocracks challenging. On the other hand, the peridynamic theory is promising for the simulation of fractures, as it naturally allows discontinuities in the displacement field. Here, we present a hybrid local-continuum damage/peridynamic model. Local-continuum damage mechanics is used to describe "volume" damage before localization. Once localization is detected at a point, the remaining part of the energy is dissipated through an adaptive peridynamic model capable of the transition to a "surface" degradation, typically a crack. We believe that this framework, which actually mimics the real physical process of crack formation, is the first bridge between continuum damage theories and peridynamics. Two-dimensional numerical examples are used to illustrate that an objective simulation of material failure can be achieved by this method.
Energy Technology Data Exchange (ETDEWEB)
Sanchez Camacho, Enrique; Andreu Alvarez, Joaquin [Universidad Politecnica de Valencia (Spain)
2001-06-01
Two numerical procedures, based on the genetic algorithm (GA) and simulated annealing (SA), are developed to solve the problem of expanding the capacity of a water resource system. The problem was divided into two subproblems: capital availability and operation policy. Both are optimisation-simulation models; the first is solved by means of the GA and SA, in each case, while the second is solved using the out-of-kilter algorithm (OKA) in both models. The objective function considers the usual benefits and costs in this kind of system, such as irrigation and hydropower benefits and the costs of dam construction and system maintenance. The strengths and weaknesses of both models are evaluated by comparing their results with those obtained with the branch-and-bound technique, which has classically been used to solve this kind of problem.
Institute of Scientific and Technical Information of China (English)
张廷龙; 孙睿; 胡波; 冯丽超
2011-01-01
Ecological process models are built on explicit mechanisms and can simulate the behavior and features of terrestrial ecosystems well, but their numerous parameters become a bottleneck in practical applications. Taking the Biome-BGC model as an example, this paper uses a simulated annealing algorithm to optimize its physiological and ecological parameters. In the optimization process, the parameters to be optimized were first selected and then optimized stepwise. The results show that, with the optimized parameters, the model simulations agree more closely with observations, and that parameter optimization effectively reduces the uncertainty of model simulation. The process and method of parameter optimization presented here provide an example and an approach for parameter identification and optimization in ecological models, and help extend the models' range of application.
Mostaghimi, P.; Percival, J. R.; Pavlidis, D.; Gorman, G.; Jackson, M.; Neethling, S.; Pain, C. C.
2013-12-01
Numerical simulation of multiphase flow in porous media is of importance in a wide range of applications in science and engineering. We present a novel control volume finite element method (CVFEM) to solve for multi-scale flow in heterogeneous geological formations. It employs a node-centred control volume approach to discretize the saturation equation, while a control volume finite element method is applied to the pressure equation. We embed the discrete continuity equation into the pressure equation and ensure that continuity is exactly enforced. Anisotropic mesh adaptivity is used to accurately model the fine-grained features of multiphase flow. The adaptive algorithm uses a metric tensor field based on solution error estimates to locally control the size and shape of elements. Moreover, it uses metric advection between adaptive meshes to predict the required mesh density ahead of the advancing saturation front, thereby reducing numerical dispersion there. The scheme is capable of capturing multi-scale heterogeneity, such as that in fractured porous media, through the use of several constraints on the element size in different regions of the porous medium. We show the application of our method to some challenging benchmark problems. For flow in fractured reservoirs, the scheme adapts the mesh as the flow penetrates through the fracture and the matrix. The constraints on the element size within the fracture are smaller by several orders of magnitude than the generated mesh within the matrix. We show that the scheme captures the key multi-scale features of flow while preserving the geometry. We demonstrate that mesh adaptation can be used to accurately simulate flow in heterogeneous porous media at low computational cost.
Lawson, Steven R; Manning, Robert E; Valliere, William A; Wang, Benjamin
2003-07-01
Public visits to parks and protected areas continue to increase and may threaten the integrity of natural and cultural resources and the quality of the visitor experience. Scientists and managers have adopted the concept of carrying capacity to address the impacts of visitor use. In the context of outdoor recreation, the social component of carrying capacity refers to the level of visitor use that can be accommodated in parks and protected areas without diminishing the quality of the visitor experience to an unacceptable degree. This study expands and illustrates the use of computer simulation modeling as a tool for proactive monitoring and adaptive management of social carrying capacity at Arches National Park. A travel simulation model of daily visitor use throughout the Park's road and trail network and at selected attraction sites was developed, and simulations were conducted to estimate a daily social carrying capacity for Delicate Arch, an attraction site in Arches National Park, and for the Park as a whole. Further, a series of simulations were conducted to estimate the effect of a mandatory shuttle bus system on daily social carrying capacity of Delicate Arch to illustrate how computer simulation modeling can be used as a tool to facilitate adaptive management of social carrying capacity.
A Personified Annealing Algorithm for Circles Packing Problem
Institute of Scientific and Technical Information of China (English)
ZHANG De-Fu; LI Xin
2005-01-01
The circles packing problem is NP-hard and difficult to solve. In this paper, a hybrid search strategy for the circles packing problem is discussed. A way of generating new configurations is presented by simulating the movement of elastic objects, which avoids the blindness of simulated annealing search and makes the iteration process converge quickly. Inspired by the life experiences of people, an effective personified strategy to jump out of local minima is given. Based on the simulated annealing idea and the personification strategy, an effective personified annealing algorithm for the circles packing problem is developed. Numerical experiments on benchmark problem instances show that the proposed algorithm outperforms the best algorithm in the literature.
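The record does not include the authors' algorithm; as a generic baseline for contrast, a plain simulated-annealing sketch for circle packing (penalty energy, Gaussian moves, geometric cooling — all our illustrative choices, without the paper's elastic-motion moves or personified jump strategy) could read:

```python
import math
import random

def overlap_energy(centers, radii, R):
    """Sum of squared pairwise overlaps plus container-violation penalties,
    for circles packed inside a container circle of radius R centred at the origin."""
    e = 0.0
    n = len(centers)
    for i in range(n):
        xi, yi = centers[i]
        d_out = math.hypot(xi, yi) + radii[i] - R   # protrusion beyond the container
        if d_out > 0:
            e += d_out ** 2
        for j in range(i + 1, n):
            xj, yj = centers[j]
            pen = radii[i] + radii[j] - math.hypot(xi - xj, yi - yj)
            if pen > 0:                             # circles i and j overlap
                e += pen ** 2
    return e

def anneal_packing(radii, R, T0=1.0, alpha=0.97, sweeps=300, seed=0):
    """Anneal circle centres; a feasible packing has energy zero."""
    rng = random.Random(seed)
    centers = [(rng.uniform(-R / 2, R / 2), rng.uniform(-R / 2, R / 2)) for _ in radii]
    e = overlap_energy(centers, radii, R)
    T = T0
    for _ in range(sweeps):
        for i in range(len(radii)):
            old = centers[i]
            step = 0.1 * R
            centers[i] = (old[0] + rng.gauss(0, step), old[1] + rng.gauss(0, step))
            e_new = overlap_energy(centers, radii, R)
            if e_new <= e or rng.random() < math.exp((e - e_new) / T):
                e = e_new                           # accept the move
            else:
                centers[i] = old                    # reject and restore
        T *= alpha                                  # geometric cooling
    return centers, e
```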
Hoogerheide, L.F.; Opschoor, A.; van Dijk, Nico M.
2012-01-01
This discussion paper was published in the Journal of Econometrics (2012). Vol. 171(2), 101-120. A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of...
AN EVENT DRIVEN SIMULATION FOR ADAPTIVE GENTLE RANDOM EARLY DETECTION (AGRED) ALGORITHM
Directory of Open Access Journals (Sweden)
Omid Seifaddini
2014-01-01
Simulations are used to find optimal answers to problems in a wide range of areas. Active queue management algorithms such as RED and GRED are typically studied with simulators like ns2, which is open source, or OPNET and OMNET, which are commercial. However, besides the benefits of using simulators, such as ready-made modules and parameters, there are problems such as complexity, large integrated components, and licensing cost. To strike a balance between these benefits and problems, and to further complement the repository of simulators, this study presents a discrete event simulation for active queue management based on a general-purpose programming language. The research focused on developing a discrete event simulator implementing one of the active queue management algorithms, AGRED. The results showed that the developed simulator successfully reproduced the results of the previous simulator for AGRED, with an average deviation of 1.5%.
Austenite formation during intercritical annealing
A. Lis; J. Lis
2008-01-01
Purpose: To determine the effect of soft annealing of the initial microstructure of 6Mn16 steel on the kinetics of austenite formation during subsequent intercritical annealing. Design/methodology/approach: Analytical TEM point analysis with an EDAX system attached to a Philips CM20 was used to evaluate the concentration of Mn, Ni and Cr in the microstructure constituents of the multiphase steel, mainly bainite-martensite islands. Findings: The increase in soft annealing time from 1-60 hou...
Michetti, Davide; Brandsdal, Bjørn Olav; Bon, Davide; Isaksen, Geir Villy; Tiberti, Matteo; Papaleo, Elena
2017-01-01
The psychrophilic and mesophilic endonucleases A (EndA) from Aliivibrio salmonicida (VsEndA) and Vibrio cholerae (VcEndA) have been studied experimentally in terms of the biophysical properties related to thermal adaptation. The analysis of their static X-ray structures was not sufficient to rationalize the determinants of their adaptive traits at the molecular level. Thus, we used Molecular Dynamics (MD) simulations to compare the two proteins and unveil their structural and dynamical differences. Our simulations did not show a substantial increase in flexibility in the cold-adapted variant on the nanosecond time scale. The only exception is a more rigid C-terminal region in VcEndA, which is ascribable to a cluster of electrostatic interactions and hydrogen bonds, as also supported by MD simulations of the VsEndA mutant variant in which the cluster of interactions was introduced. Moreover, we identified three additional amino acid substitutions through multiple sequence alignment and the analysis of MD-based protein structure networks. In particular, T120V occurs in the proximity of the catalytic residue H80 and alters the interaction with the residue Y43, which belongs to the second coordination sphere of the Mg2+ ion. This makes T120V an amenable candidate for future experimental mutagenesis. PMID:28192428
A Hybrid Simulated Annealing Algorithm for the Three-Dimensional Packing Problem
Institute of Scientific and Technical Information of China (English)
张德富; 彭煜; 朱文兴; 陈火旺
2009-01-01
This paper presents an efficient hybrid simulated annealing algorithm for the three-dimensional container loading problem (3D-CLP). The 3D-CLP is the problem of loading a subset of a given set of rectangular boxes into a rectangular container so that the stowed volume is maximized. The algorithm introduced in this paper is based on three components. First, complex block generation: unlike in traditional algorithms, a complex block is not restricted to a single box type but can contain any number of boxes of different types. Second, a basic heuristic: a new construction heuristic used to generate a feasible packing solution from a packing sequence. Third, a simulated annealing algorithm: built on the complex blocks and the basic heuristic, it encodes a feasible packing solution as a packing sequence and searches the encoding space for an approximate optimal solution. 1500 benchmark instances with weakly and strongly heterogeneous boxes are considered. The computational results show that the hybrid algorithm outperforms the current best algorithms for this problem in volume utilization.
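The encoding idea — a packing represented by a sequence that a greedy decoder turns into a loading, with annealing performed in sequence space — can be sketched on a deliberately simplified one-dimensional stand-in (the decoder, the swap move, and all parameters are our illustrative assumptions, not the paper's algorithm):

```python
import math
import random

def decode(seq, volumes, capacity):
    """Greedy decoder: walk the packing sequence, loading each box that still fits,
    and return the total stowed volume."""
    used = 0
    for i in seq:
        if used + volumes[i] <= capacity:
            used += volumes[i]
    return used

def anneal_sequence(volumes, capacity, T0=2.0, alpha=0.995, steps=2000, seed=1):
    """Simulated annealing over packing sequences; the neighbor move swaps two positions."""
    rng = random.Random(seed)
    seq = list(range(len(volumes)))
    rng.shuffle(seq)
    val = decode(seq, volumes, capacity)
    best = val
    T = T0
    for _ in range(steps):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]            # propose: swap two positions
        new = decode(seq, volumes, capacity)
        if new >= val or rng.random() < math.exp((new - val) / T):
            val = new                              # accept (maximizing stowed volume)
            best = max(best, val)
        else:
            seq[i], seq[j] = seq[j], seq[i]        # reject: undo the swap
        T *= alpha                                 # geometric cooling
    return best
```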
Directory of Open Access Journals (Sweden)
Essadki Mohamed
2016-09-01
Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue for properly predicting the various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with an entropy-maximization technique providing a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and a coupling with a separated-phases model describing the dynamics and atomization of the interface near the injector. The novelty lies in the modeling, the numerical schemes and the implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively
Population Annealing: Theory and Application in Spin Glasses
Machta, Jonathan; Wang, Wenlong; Katzgraber, Helmut G.
Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar in efficiency though with different strengths and weaknesses. Supported by NSF DMR-1151387, DMR-1208046 and DMR-1507506.
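As a toy illustration of the scheme described above — alternating importance resampling and Metropolis equilibration along a temperature schedule — here is our own minimal version on a 1D Ising chain rather than the 3D spin glass of the abstract; the population size, schedule, and sweep counts are arbitrary choices:

```python
import math
import random

def energy(spins):
    """Energy of a 1D periodic Ising ferromagnet, E = -sum_i s_i * s_{i+1}."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def population_annealing(n=10, pop=200, betas=(0.1, 0.3, 0.5, 0.8, 1.2, 2.0),
                         sweeps=5, seed=2):
    """Return the lowest energy found in the final population."""
    rng = random.Random(seed)
    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(pop)]
    beta_prev = 0.0
    for beta in betas:
        # 1) Resample replicas with weights exp(-(beta - beta_prev) * E),
        #    keeping the population size fixed.
        w = [math.exp(-(beta - beta_prev) * energy(s)) for s in reps]
        idx = rng.choices(range(pop), weights=w, k=pop)
        reps = [list(reps[i]) for i in idx]
        # 2) Equilibrate each replica with Metropolis sweeps at the new temperature.
        for s in reps:
            for _ in range(sweeps):
                for i in range(n):
                    dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
                    if dE <= 0 or rng.random() < math.exp(-beta * dE):
                        s[i] = -s[i]
        beta_prev = beta
    return min(energy(s) for s in reps)
```

For this small ferromagnetic chain the population reliably reaches the all-aligned ground state (energy -n).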
Institute of Scientific and Technical Information of China (English)
陈香
2013-01-01
To arrange interview panels so that interviews are fair and objective, this paper models the scheduling of interview panel members as a complex nonlinear integer programming problem. A genetic algorithm with bin-packing-style encoding, simulated annealing acceptance, multi-point crossover, and neighborhood-search mutation is proposed to solve the model. An example in which 30 experts interview 300 students, with four experts per interview group, is solved with the algorithm. The results show that the improved genetic algorithm efficiently finds an approximate optimal solution that satisfies the requirements of fair and reasonable interview scheduling.
Institute of Scientific and Technical Information of China (English)
彭碧涛; 周永务
2011-01-01
The classical vehicle routing problem considers only the weight of the goods loaded and ignores other loading constraints, such as loading space. This paper addresses the vehicle routing problem with a three-dimensional loading constraint. A heuristic is proposed for loading goods such that the three-dimensional loading constraint is satisfied, and a two-stage heuristic based on simulated annealing is designed to solve the problem: the first stage obtains an initial solution with a constructive heuristic, and the second stage improves it with simulated annealing. A set of benchmark instances is constructed to test the proposed method; the results show that the algorithm solves the problem effectively.
Site, Luigi Delle; Junghans, Christoph; Wang, Han
2014-01-01
We describe the adaptive resolution multiscale method AdResS. The conceptual evolution as well as the improvements of its technical efficiency are described step by step, with an explicit reference to current limitations and open problems.
Radiation annealing in cuprous oxide
DEFF Research Database (Denmark)
Vajda, P.
1966-01-01
Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square-root relationship between the rate of change of resistivity and the resistivity change. The saturation defect density at room temperature is estimated on the basis of a model for defect creation in cuprous oxide.
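The square-root law attributed to Balarin and Hauser can be written schematically as follows (the symbols are our own shorthand, not taken from the paper): with $\Delta\rho$ the radiation-induced resistivity change and $\Phi$ the dose,

```latex
\frac{\mathrm{d}(\Delta\rho)}{\mathrm{d}\Phi} \propto \sqrt{\Delta\rho}
```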
Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy
2016-08-25
... open-system quantum annealing algorithm optimized for such a realistic analog quantum device which takes advantage of noise-induced thermalization and ... relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm considering a p-spin model ... annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is
Directory of Open Access Journals (Sweden)
Dębski Roman
2016-06-01
A new parallel algorithm based on dynamic programming, adapted to on-board heterogeneous computers, for simulation-based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions, called multi-splines, as its search-space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search-space auto-adaptation properties), and possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be used effectively to solve simulation-based trajectory optimization problems. These computers can be considered micro high-performance computing (HPC) platforms: they offer high performance while remaining energy- and cost-efficient. The simulation-based approach can potentially give highly accurate results, since the mathematical model the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box performance measure and its use of OpenCL.
Quantum annealing with manufactured spins.
Johnson, M W; Amin, M H S; Gildert, S; Lanting, T; Hamze, F; Dickson, N; Harris, R; Berkley, A J; Johansson, J; Bunyk, P; Chapple, E M; Enderud, C; Hilton, J P; Karimi, K; Ladizinsky, E; Ladizinsky, N; Oh, T; Perminov, I; Rich, C; Thom, M C; Tolkacheva, E; Truncik, C J S; Uchaikin, S; Wang, J; Wilson, B; Rose, G
2011-05-12
Many interesting but practically intractable problems can be reduced to that of finding the ground state of a system of interacting spins; however, finding such a ground state remains computationally difficult. It is believed that the ground state of some naturally occurring spin systems can be effectively attained through a process called quantum annealing. If it could be harnessed, quantum annealing might improve on known methods for solving certain types of problem. However, physical investigation of quantum annealing has been largely confined to microscopic spins in condensed-matter systems. Here we use quantum annealing to find the ground state of an artificial Ising spin system comprising an array of eight superconducting flux quantum bits with programmable spin-spin couplings. We observe a clear signature of quantum annealing, distinguishable from classical thermal annealing through the temperature dependence of the time at which the system dynamics freezes. Our implementation can be configured in situ to realize a wide variety of different spin networks, each of which can be monitored as it moves towards a low-energy configuration. This programmable artificial spin network bridges the gap between the theoretical study of ideal isolated spin networks and the experimental investigation of bulk magnetic samples. Moreover, with an increased number of spins, such a system may provide a practical physical means to implement a quantum algorithm, possibly allowing more-effective approaches to solving certain classes of hard combinatorial optimization problems.