Adaptive Simulated Annealing Based Protein Loop Modeling of Neurotoxins
陈杰; 黄丽娜; 彭志红
2003-01-01
A loop modeling method based on adaptive simulated annealing is proposed for the ab initio prediction of protein loop structures, formulated as the optimization problem of searching for the global minimum of a given energy function. An interface-friendly toolbox, LoopModeller, is developed for analysis and visualization; it runs on Windows and Linux and is built in VC++ with OpenGL. Simulation results for three short-chain neurotoxins modeled by LoopModeller show that the proposed method is fast and efficient.
An adaptive approach to the physical annealing strategy for simulated annealing
Hasegawa, M.
2013-02-01
A new and reasonable method for the adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding about the search characteristics of threshold algorithms, namely the primary role of relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in a heating process that starts from a quenched solution. Subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
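The finite-time SA process discussed above can be sketched generically. The paper's contribution is *predicting* the effective temperature window; the sketch below simply assumes that window is already known (the toy landscape, `t_start`, `t_end`, and step sizes are all illustrative assumptions, not the author's setup):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t_start, t_end, steps, seed=0):
    """Generic finite-time SA: Metropolis acceptance with slow geometric
    cooling through the temperature window [t_end, t_start]."""
    rng = random.Random(seed)
    x = x0
    e = energy(x)
    best_x, best_e = x, e
    alpha = (t_end / t_start) ** (1.0 / max(steps - 1, 1))
    t = t_start
    for _ in range(steps):
        y = neighbor(x, rng)
        de = energy(y) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = y, e + de
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha
    return best_x, best_e

# Toy double-well landscape: global minimum near x = -1, local one near x = +1.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, e_best = simulated_annealing(f, step, x0=1.0, t_start=2.0, t_end=0.01, steps=20000)
```

Starting in the local basin at x = +1, slow cooling lets the walker cross the barrier at x = 0 and settle in the global basin near x = -1.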
Sheng, Zheng, E-mail: 19994035@sina.com [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Wang, Jun; Zhou, Bihua [National Defense Key Laboratory on Lightning Protection and Electromagnetic Camouflage, PLA University of Science and Technology, Nanjing 210007 (China); Zhou, Shudao [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044 (China)
2014-03-15
This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation tunes the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and particle swarm optimization; simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com [Master Program of Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia); Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id [Global Geophysical Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia)
2015-04-24
Earthquake observation is routinely used in monitoring tectonic activity, and also at local scales such as volcano-tectonic and geothermal monitoring. Precise hypocenter determination requires finding a hypocenter location that minimizes the error between observed and calculated travel times. For solving this nonlinear inverse problem, simulated annealing inversion can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and a geothermal field. Travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach
Sungwook Kim
2014-01-01
A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics such as temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed that employs a simulated annealing approach. The proposed metaheuristic can achieve greater and reciprocal advantages in hostile, dynamic real-world network situations; it is thus a powerful method for finding an effective solution to the conflicting objectives of mobile ad hoc network routing. Simulation results indicate that the proposed paradigm adapts best to variations in dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over existing schemes.
Stochastic Global Optimization and Its Applications with Fuzzy Adaptive Simulated Annealing
Aguiar e Oliveira Junior, Hime; Petraglia, Antonio; Rembold Petraglia, Mariane; Augusta Soares Machado, Maria
2012-01-01
Stochastic global optimization is a very important subject that has applications in virtually all areas of science and technology. There is therefore nothing more opportune than writing a book about a successful and mature algorithm that has turned out to be a good tool for solving difficult problems. Here we present techniques for solving several problems by means of Fuzzy Adaptive Simulated Annealing (Fuzzy ASA), a fuzzy-controlled version of ASA, and by ASA itself. ASA is a sophisticated global optimization algorithm based on the ideas of the simulated annealing paradigm, coded in the C programming language and developed to statistically find the best global fit of a nonlinear, constrained, non-convex cost function over a multi-dimensional space. By presenting detailed examples of its application we want to stimulate the reader's intuition and make the use of Fuzzy ASA (or regular ASA) easier for everyone wishing to use these tools to solve problems. We kept formal mathematical requirements to a...
Schneider, Johannes J.; Puchta, Markus
2010-12-01
Simulated annealing is the classic physical optimization algorithm and has been applied to a large variety of problems for many years. Over time, several adaptive mechanisms for decreasing the temperature, and thus controlling the acceptance of deteriorations, have been developed, based on measurements of the mean value and the variance of the energy. Here we propose a new simplified approach in which we consider the probability of accepting deteriorations as the main control parameter and derive the temperature by averaging over the last few deteriorations stored in a memory. We present results for the traveling salesman problem and demonstrate how the amount of data retained influences both the cooling schedule and the quality of the results.
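A minimal sketch of the idea (not the authors' code): the target acceptance probability p is scheduled directly, and the temperature is derived from a memory of the last few deteriorations via T = -&lt;dE&gt;/ln p. The TSP instance size, the p-schedule, and the memory length are assumptions for illustration:

```python
import math
import random
from collections import deque

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, rng):
    # Reverse a random segment of the tour (classic 2-opt move).
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def sa_acceptance_controlled(pts, p_start=0.8, p_end=1e-3, steps=20000,
                             memory=50, seed=1):
    """SA with the acceptance probability of deteriorations as the control
    parameter; T is computed from the mean of the last `memory` stored
    deteriorations (a sketch of the abstract's idea)."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    e = tour_length(tour, pts)
    best = e
    dets = deque([e / len(pts)], maxlen=memory)  # seed value for the memory
    for k in range(steps):
        # exponential schedule for the target acceptance probability
        p = p_start * (p_end / p_start) ** (k / (steps - 1))
        t = -(sum(dets) / len(dets)) / math.log(p)
        cand = two_opt(tour, rng)
        de = tour_length(cand, pts) - e
        if de > 0:
            dets.append(de)                # remember recent deteriorations
        if de <= 0 or rng.random() < math.exp(-de / t):
            tour, e = cand, e + de
            best = min(best, e)
    return best

rng0 = random.Random(42)
pts = [(rng0.random(), rng0.random()) for _ in range(25)]
init = tour_length(list(range(25)), pts)
best = sa_acceptance_controlled(pts)
```

Because the memory holds recent deteriorations, the derived temperature automatically shrinks as the tour improves and the typical uphill move gets smaller.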
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
Keystream Generator Based On Simulated Annealing
Ayad A. Abdulsalam
2011-01-01
Advances in the design of keystream generators using heuristic techniques are reported. A simulated annealing algorithm for generating random keystreams with large complexity is presented; the simulated annealing technique is adapted to meet these requirements. The definitions of some cryptographic properties are generalized, providing a measure suitable for use as an objective function in a simulated annealing algorithm seeking keystreams that satisfy both correlation immunity and large linear complexity. Results are presented demonstrating the effectiveness of the method.
multicast using Simulated Annealing
Yezid Donoso
2005-01-01
This article presents a multi-objective optimization method for solving the load-balancing problem in multicast transmission networks, based on the Simulated Annealing metaheuristic. The method minimizes four basic parameters to guarantee quality of service in multicast transmissions: origin-destination delay, maximum link utilization, consumed bandwidth, and number of hops. The results returned by the heuristic are compared with the results produced by the mathematical model proposed in previous research.
Recursive simulation of quantum annealing
Sowa, A P; Samson, J H; Savel'ev, S E; Zagoskin, A M; Heidel, S; Zúñiga-Anaya, J C
2015-01-01
The evaluation of the performance of adiabatic annealers is hindered by the lack of efficient algorithms for simulating their behaviour. We exploit the analyticity of the standard model of the adiabatic quantum process to develop an efficient recursive method for its numerical simulation in the case of both unitary and non-unitary evolution. Numerical simulations show distinctly different distributions for the most important figure of merit of adiabatic quantum computing, the success probability, in these two cases.
Feasibility of Simulated Annealing Tomography
Vo, Nghia T; Moser, Herbert O
2014-01-01
Simulated annealing tomography (SAT) is a simple iterative image reconstruction technique which can yield a superior reconstruction compared with filtered back-projection (FBP). However, the very high computational cost of iteratively calculating discrete Radon transform (DRT) has limited the feasibility of this technique. In this paper, we propose an approach based on the pre-calculated intersection lengths array (PILA) which helps to remove the step of computing DRT in the simulated annealing procedure and speed up SAT by over 300 times. The enhancement of convergence speed of the reconstruction process using the best of multiple-estimate (BoME) strategy is introduced. The performance of SAT under different conditions and in comparison with other methods is demonstrated by numerical experiments.
Residual entropy and simulated annealing
Ettelaie, R.; Moore, M. A.
1985-01-01
Determining the residual entropy in the simulated annealing approach to optimization is shown to provide useful information on the true ground state energy. The one-dimensional Ising spin glass is studied to exemplify the procedure and in this case the residual entropy is related to the number of one-spin flip stable metastable states. The residual entropy decreases to zero only logarithmically slowly with the inverse cooling rate.
Naskar, Pulak; Talukder, Srijeeta; Chaudhury, Pinaki
2017-04-05
In this communication, we discuss the advantages of adaptive mutation simulated annealing (AMSA) over standard simulated annealing (SA) in studying the Coulombic explosion of (CO2)n^2+ clusters for n = 20-68, where 'n' is the size of the cluster. We demonstrate how AMSA can overcome the predicaments that arise in conventional SA by adapting the parameters dynamically during the simulations (only when needed), so that the search process can escape high-energy basins without going astray, yielding better exploration and convergence. The technique also has a built-in ability to find more than one minimum in a single run. For the (CO2)n^2+ cluster system we find the critical limit to be n = 43, above which the attractive forces between individual units become greater than the large repulsive forces and the clusters stay intact as the energetically favoured isomers. This result is in good concurrence with earlier studies. Moreover, we have studied the fragmentation patterns over the entire size range and find fission-type fragmentation to be the favoured mechanism for nearly all sizes.
Simulated annealing model of acupuncture
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points (placebo control points). Such differences diminish as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. A properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) that allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models; in this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, originating in the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy for discretizing the variables and a set of complementary stopping criteria have been proposed. The various parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows chaining the simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique that ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain - the threshold method, a genetic algorithm, and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
An Adaptive Filtering Algorithm using Mean Field Annealing Techniques
Persson, Per; Nordebo, Sven; Claesson, Ingvar
2002-01-01
We present a new approach to discrete adaptive filtering based on the mean field annealing algorithm. The main idea is to find the discrete filter vector that minimizes the matrix form of the Wiener-Hopf equations in a least-squares sense by a generalized mean field annealing algorithm. Simulations indicate that this approach, with complexity O(M^2), where M is the filter length, finds a solution comparable to the one obtained by the recursive least squares (RLS) algorithm but withou...
Cylinder packing by simulated annealing
M. Helena Correia
2000-12-01
This paper is motivated by the problem of loading identical items with circular bases (tubes, rolls, ...) onto a rectangular base (the pallet). For practical reasons, all the loaded items are considered to have the same height. Solving this problem consists in determining the positioning pattern of the items' circular bases on the rectangular pallet while maximizing the number of items; this pattern is repeated for each layer stacked on the pallet. Two algorithms based on the Simulated Annealing metaheuristic have been developed and implemented, and tuning their parameters required intensive tests in order to improve efficiency. The algorithms developed were easily extended to the case of non-identical circles.
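The positioning subproblem can be illustrated with a small sketch: for a *fixed* number of circles (not the count-maximizing variant the paper solves), SA minimizes a penalty that is zero exactly when the layout is feasible. The radius, box size, move size, and schedule below are illustrative assumptions:

```python
import math
import random

def packing_penalty(centers, r, w, h):
    """Sum of boundary violations and pairwise overlaps; zero iff feasible."""
    pen = 0.0
    for i, (x, y) in enumerate(centers):
        pen += max(0.0, r - x) + max(0.0, x - (w - r))   # left/right walls
        pen += max(0.0, r - y) + max(0.0, y - (h - r))   # bottom/top walls
        for x2, y2 in centers[i + 1:]:
            pen += max(0.0, 2 * r - math.hypot(x - x2, y - y2))  # overlap
    return pen

def pack_circles(n, r, w, h, steps=30000, t0=0.5, t1=1e-4, seed=2):
    rng = random.Random(seed)
    centers = [(rng.uniform(r, w - r), rng.uniform(r, h - r)) for _ in range(n)]
    e = packing_penalty(centers, r, w, h)
    best = e
    alpha = (t1 / t0) ** (1 / (steps - 1))
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        x, y = centers[i]
        cand = centers[:]
        cand[i] = (x + rng.uniform(-0.3, 0.3), y + rng.uniform(-0.3, 0.3))
        e2 = packing_penalty(cand, r, w, h)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
            centers, e = cand, e2
            best = min(best, e)
        t *= alpha
    return best

# Four circles of radius 0.9 fit comfortably in a 4 x 4 pallet.
best = pack_circles(n=4, r=0.9, w=4.0, h=4.0)
```

To *maximize* the count as in the paper, one would wrap this in an outer loop that increments n while the penalty can still be driven to zero.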
Quantum Adiabatic Evolution Algorithms versus Simulated Annealing
Farhi, E; Gutmann, S; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam
2002-01-01
We explain why quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time.
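The symmetric setting above (cost depending only on the Hamming weight) is easy to sketch. The instance below is a deliberately easy one, monotone in the weight, where SA succeeds quickly; the hard examples in the paper add a barrier in weight, which this sketch does not reproduce:

```python
import math
import random

def hamming_sa(n, cost_of_weight, steps=5000, t0=2.0, t1=0.01, seed=3):
    """SA over n-bit strings whose cost depends only on the Hamming weight
    (the symmetric setting of the abstract); single-bit-flip moves."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    w = sum(bits)
    best = cost_of_weight(w)
    alpha = (t1 / t0) ** (1 / (steps - 1))
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        dw = 1 - 2 * bits[i]                 # weight change if bit i flips
        de = cost_of_weight(w + dw) - cost_of_weight(w)
        if de <= 0 or rng.random() < math.exp(-de / t):
            bits[i] ^= 1
            w += dw
            best = min(best, cost_of_weight(w))
        t *= alpha
    return best

# Easy instance: cost equal to the weight itself, minimum 0 at the all-zero string.
best = hamming_sa(20, lambda w: w)
```

Replacing the cost with one that has a high thin barrier between two weight basins is exactly the kind of instance where, per the abstract, simulated annealing needs exponential time while the quantum adiabatic algorithm does not.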
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical for achieving the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error for a given parameter set was computed as the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search.
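The fast-SA component can be sketched generically: Cauchy-distributed visits with the Szu-Hartley cooling T(k) = t0/(1+k). This is not the authors' FSA-AMC code; the mTRE registration error is replaced here by a generic black-box `loss`, and the bounds and schedule are assumptions:

```python
import math
import random

def fast_sa(loss, x0, lo, hi, t0=1.0, steps=5000, seed=8):
    """Fast-SA sketch: Cauchy-distributed visits, T(k) = t0 / (1 + k)."""
    rng = random.Random(seed)
    x = list(x0)
    e = loss(x)
    best_x, best_e = x[:], e
    for k in range(steps):
        t = t0 / (1 + k)
        # Cauchy visit: tan of a uniform angle gives heavy-tailed jumps,
        # clipped to the search box [lo, hi].
        y = [min(hi, max(lo, xi + t * math.tan(math.pi * (rng.random() - 0.5))))
             for xi in x]
        e2 = loss(y)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
            x, e = y, e2
            if e < best_e:
                best_x, best_e = x[:], e
    return best_x, best_e

# Black-box stand-in for the registration error: a shifted 2-D quadratic.
sphere = lambda v: sum((vi - 0.3) ** 2 for vi in v)
x_best, e_best = fast_sa(sphere, x0=[0.0, 0.0], lo=-2.0, hi=2.0)
```

The heavy-tailed visits allow occasional long jumps even at low temperature, which is what permits the fast 1/(1+k) cooling.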
Stochastic annealing simulation of cascades in metals
Heinisch, H.L.
1996-04-01
The stochastic annealing simulation code ALSOME is used to investigate quantitatively the differential production of mobile vacancy and self-interstitial atom (SIA) defects as a function of temperature for isolated 25 keV cascades in copper generated by MD simulations. The ALSOME code and the cascade annealing simulations are described. The annealing simulations indicate that above Stage V, where the cascade vacancy clusters are unstable, nearly 80% of the post-quench vacancies escape the cascade volume, while about half of the post-quench SIAs remain in clusters. The results are sensitive to the relative fractions of SIAs that occur in small, highly mobile clusters and in large stable clusters, respectively, which may depend on the cascade energy.
An Application of Simulated Annealing to Scheduling Army Unit Training
1986-10-01
Simulated annealing operates by analogy to the metallurgical process that strengthens metals through successive heating and cooling. The method is highly... diminishing returns is observed. The simulated annealing heuristic operates by analogy to annealing in physical systems.
A simulated annealing technique for multi-objective simulation optimization
Mahmoud H. Alrefaei; Diabat, Ali H.
2009-01-01
In this paper, we present a simulated annealing algorithm for solving multi-objective simulation optimization problems. The algorithm is based on the idea of simulated annealing with constant temperature, and uses a rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge almost surely to an optimal solution. It is applied to a multi-objective inventory problem; the numerical results show that the algorithm ...
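A sketch of a constant-temperature, multi-objective acceptance rule in the spirit of the abstract (the exact rule, the bi-objective toy problem, and the archive policy below are assumptions, not the authors' algorithm): accept if no objective worsens, otherwise accept with probability driven by the largest single increase, and keep an external archive of non-dominated solutions.

```python
import math
import random

def dominates(a, b):
    """True if objective vector a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mo_sa_constant_t(objectives, neighbor, x0, t=1.0, steps=10000, seed=4):
    """Constant-temperature SA for several objectives (illustrative sketch)."""
    rng = random.Random(seed)
    x = x0
    fx = objectives(x)
    archive = [fx]
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objectives(y)
        worst = max(b - a for a, b in zip(fx, fy))   # largest single increase
        if worst <= 0 or rng.random() < math.exp(-worst / t):
            x, fx = y, fy
            # Archive update: keep only mutually non-dominated vectors.
            if not any(dominates(g, fx) for g in archive):
                archive = [g for g in archive if not dominates(fx, g)]
                if fx not in archive:
                    archive.append(fx)
    return archive

# Bi-objective toy: minimize (x^2, (x-2)^2); the Pareto front is x in [0, 2].
objs = lambda x: (x * x, (x - 2.0) ** 2)
move = lambda x, rng: x + rng.uniform(-0.3, 0.3)
front = mo_sa_constant_t(objs, move, x0=-1.0)
```

Because the temperature never drops, the chain keeps wandering along the trade-off region, and the archive, not the current state, accumulates the approximation of the Pareto front.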
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework is developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
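The capital-growth objective admits a small sketch: maximize the sample mean of the log portfolio return over the weight simplex by SA. The scenario data, the mass-shift move, and the schedule are illustrative assumptions, not the authors' setup:

```python
import math
import random

def log_growth(weights, scenarios):
    """Average log growth of a fixed-mix portfolio over return scenarios."""
    return sum(math.log(sum(w * r for w, r in zip(weights, row)))
               for row in scenarios) / len(scenarios)

def sa_portfolio(scenarios, steps=20000, t0=0.1, t1=1e-4, seed=5):
    rng = random.Random(seed)
    n = len(scenarios[0])
    w = [1.0 / n] * n
    e = -log_growth(w, scenarios)            # minimise negative log growth
    best_w, best_e = w[:], e
    alpha = (t1 / t0) ** (1 / (steps - 1))
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        d = rng.uniform(0.0, min(w[i], 0.1))  # move mass i -> j on the simplex
        cand = w[:]
        cand[i] -= d
        cand[j] += d
        e2 = -log_growth(cand, scenarios)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
            w, e = cand, e2
            if e < best_e:
                best_w, best_e = w[:], e
        t *= alpha
    return best_w, -best_e

# Two equally likely scenarios; asset 0 is favourable enough that the
# log-optimal (Kelly) weight is the corner (1, 0).
scenarios = [[1.2, 1.0], [0.9, 1.0]]
w_best, growth = sa_portfolio(scenarios)
```

The mass-shift move keeps the weights on the simplex by construction, so no projection or penalty for the budget constraint is needed.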
Binary Sparse Phase Retrieval via Simulated Annealing
Wei Peng
2016-01-01
This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transforms. A greedy-strategy version, which is parameter-free, is also proposed for comparison. Numerical simulations indicate that the method is effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in efficiency and recovery rate, even with fewer Fourier measurements.
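The problem setting can be sketched with bit-flip SA (this is an illustration of the setting, not the authors' SASPAR implementation; the signal length, schedule, and error metric are assumptions): minimize the squared error between the candidate's and the target's Fourier magnitudes.

```python
import cmath
import math
import random

def dft_mag(x):
    """Magnitudes of the discrete Fourier transform of a real sequence."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) for f in range(n)]

def phase_retrieval_sa(target_mag, n, steps=20000, t0=2.0, t1=0.05, seed=6):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    err = lambda v: sum((a - b) ** 2 for a, b in zip(dft_mag(v), target_mag))
    e = err(x)
    best_x, best_e = x[:], e
    alpha = (t1 / t0) ** (1 / (steps - 1))
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] ^= 1                           # propose a single bit flip
        e2 = err(x)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
            e = e2
            if e < best_e:
                best_x, best_e = x[:], e
        else:
            x[i] ^= 1                       # undo the rejected flip
        t *= alpha
    return best_x, best_e

true_signal = [1, 0, 0, 1, 0, 1, 0, 0]
x_rec, e_rec = phase_retrieval_sa(dft_mag(true_signal), n=8)
```

As with any magnitude-only problem, recovery is only up to the magnitude-preserving ambiguities (circular shift and reflection), all of which give zero error.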
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some state-of-the-art algorithms.
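The list-based cooling schedule described in this abstract can be illustrated with a minimal sketch (hypothetical helper names, not the authors' implementation): the list's maximum temperature drives the Metropolis test, and an accepted uphill move replaces that maximum with the temperature that would have made the move acceptable at the sampled probability.

```python
import math
import random

def lbsa_step(solution, energy, temp_list, neighbor, rng=random):
    """One iteration of a list-based SA sketch.

    temp_list: list of temperatures; its maximum is used by the
    Metropolis criterion and is replaced adaptively on accepted
    uphill moves, so the schedule tracks the solution landscape.
    """
    t_max = max(temp_list)
    cand = neighbor(solution)
    delta = energy(cand) - energy(solution)
    if delta <= 0:
        return cand, temp_list                 # downhill: always accept
    r = max(rng.random(), 1e-12)               # guard against log(0)
    if r < math.exp(-delta / t_max):
        # replace the current max with the temperature at which this
        # uphill move would be accepted with probability exactly r
        temp_list.remove(t_max)
        temp_list.append(-delta / math.log(r))
        return cand, temp_list
    return solution, temp_list
```

Iterating this step while the list adapts removes the need to hand-tune a cooling rate.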
Comparative study of the performance of quantum annealing and simulated annealing.
Nishimori, Hidetoshi; Tsuda, Junichi; Knysh, Sergey
2015-01-01
Relations of simulated annealing and quantum annealing are studied by a mapping from the transition matrix of classical Markovian dynamics of the Ising model to a quantum Hamiltonian and vice versa. It is shown that these two operators, the transition matrix and the Hamiltonian, share the eigenvalue spectrum. Thus, if simulated annealing with slow temperature change does not encounter a difficulty caused by an exponentially long relaxation time at a first-order phase transition, the same is true for the corresponding process of quantum annealing in the adiabatic limit. One of the important differences between the classical-to-quantum mapping and the converse quantum-to-classical mapping is that the Markovian dynamics of a short-range Ising model is mapped to a short-range quantum system, but the converse mapping from a short-range quantum system to a classical one results in long-range interactions. This leads to a difference in efficiencies that simulated annealing can be efficiently simulated by quantum annealing but the converse is not necessarily true. We conclude that quantum annealing is easier to implement and is more flexible than simulated annealing. We also point out that the present mapping can be extended to accommodate explicit time dependence of temperature, which is used to justify the quantum-mechanical analysis of simulated annealing by Somma, Batista, and Ortiz. Additionally, an alternative method to solve the nonequilibrium dynamics of the one-dimensional Ising model is provided through the classical-to-quantum mapping.
MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING
Ladislav Rosocha
2015-07-01
Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Better implementation of their preferences into the scheduling problem might not only raise the work-life balance of doctors and nurses but may also result in better patient care. This paper focuses on optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psychological well-being, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department, we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm has more than 18 times fewer violations of soft constraints than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules regarding hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively thanks to the local neighborhood search characteristics of simulated annealing.
A Parallel Genetic Simulated Annealing Hybrid Algorithm for Task Scheduling
SHU Wanneng; ZHENG Shijue
2006-01-01
This paper combines the advantages of the genetic algorithm and simulated annealing to propose a parallel genetic simulated annealing hybrid algorithm (PGSAHA), applied to the task scheduling problem in grid computing. It first generates a new group of individuals through genetic operations such as reproduction, crossover and mutation, and then anneals all the generated individuals independently. When the temperature in the cooling process no longer falls, the result is the global optimal solution. Analysis and experimental results show that this algorithm is superior to both the genetic algorithm and simulated annealing.
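The hybrid scheme described above can be sketched minimally as follows, assuming generic `crossover`, `mutate` and `neighbor` operators (hypothetical names, not the PGSAHA implementation): each GA-produced child is refined by an independent short annealing run, which could be executed in parallel.

```python
import math
import random

def anneal(ind, energy, neighbor, t0=1.0, alpha=0.9, steps=50, rng=random):
    """Refine one individual with a short, independent SA run."""
    t, best = t0, ind
    for _ in range(steps):
        cand = neighbor(best)
        d = energy(cand) - energy(best)
        if d <= 0 or rng.random() < math.exp(-d / t):
            best = cand
        t *= alpha                              # geometric cooling
    return best

def ga_sa_generation(pop, energy, crossover, mutate, neighbor, rng=random):
    """One GA+SA generation: genetic operators, then annealing of each child."""
    rng.shuffle(pop)                            # random mating (mutates pop)
    children = [mutate(crossover(pop[i], pop[-1 - i]))
                for i in range(len(pop) // 2)]
    refined = [anneal(c, energy, neighbor) for c in children]  # parallelizable
    return sorted(pop + refined, key=energy)[:len(pop)]        # elitist survival
```

The per-child annealing runs share no state, which is what makes the parallel variant natural.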
Hierarchical Network Design Using Simulated Annealing
Thomadsen, Tommy; Clausen, Jens
2002-01-01
The hierarchical network problem is the problem of finding the least-cost network, with nodes divided into groups, edges connecting nodes within each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks, where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance of different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
Remote sensing of atmospheric duct parameters using simulated annealing
Zhao Xiao-Feng; Huang Si-Xun; Xiang Jie; Shi Wei-Lai
2011-01-01
Simulated annealing is a robust optimization scheme that mimics the annealing process of the slow cooling of a heated metal to reach a stable minimum-energy state. In this paper, we adopt simulated annealing to study the problem of remote sensing of atmospheric duct parameters for two different geometries of propagation measurement: one from a single emitter to an array of radio receivers (vertical measurements), and the other from radar clutter returns (horizontal measurements). Basic principles of simulated annealing and its applications to refractivity estimation are introduced. The performance of this method is validated using numerical experiments and field measurements collected at the East China Sea. The retrieved results demonstrate the feasibility of simulated annealing for near real-time atmospheric refractivity estimation. For comparison, the retrievals of a genetic algorithm are also presented. The comparisons indicate that the convergence speed of simulated annealing is faster than that of the genetic algorithm, while the anti-noise ability of the genetic algorithm is better than that of simulated annealing.
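The basic principles referred to in this abstract are the canonical SA loop: Metropolis acceptance of candidate moves under a slowly decreasing temperature. A generic sketch (not the authors' retrieval code):

```python
import math
import random

def simulated_annealing(x0, energy, neighbor, t0=1.0, t_min=1e-3,
                        alpha=0.95, iters_per_temp=100, rng=random):
    """Canonical SA: Metropolis acceptance under geometric cooling."""
    x, t = x0, t0
    best = x
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(x)
            d = energy(cand) - energy(x)
            # accept downhill moves always, uphill moves with prob exp(-d/t)
            if d <= 0 or rng.random() < math.exp(-d / t):
                x = cand
            if energy(x) < energy(best):
                best = x                        # track best-so-far
        t *= alpha                              # slow cooling
    return best
```

For refractivity estimation, `x` would be the vector of duct parameters and `energy` the misfit between modeled and observed propagation.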
Simulated annealing with probabilistic analysis for solving traveling salesman problems
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization of metals; the efficiency of SA is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to provide a suitable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and thus we propose the best-found annealing schedule based on a post hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions and simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
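For symmetric TSP, the solution representation and neighborhood typically paired with such annealing schedules can be sketched as follows (a generic 2-opt move and tour-length objective; not the authors' specific setup):

```python
import random

def two_opt(tour, rng=random):
    """Standard 2-opt neighborhood move: reverse a random segment of the tour."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    """Total cycle length under a symmetric distance matrix."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]]
               for k in range(len(tour)))
```

Plugging `two_opt` in as the neighbor operator and `tour_length` as the energy yields the usual SA-for-TSP setup whose schedule parameters the study compares.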
A NEW GENETIC SIMULATED ANNEALING ALGORITHM FOR FLOOD ROUTING MODEL
KANG Ling; WANG Cheng; JIANG Tie-bing
2004-01-01
In this paper, a new approach, Genetic Simulated Annealing (GSA), is proposed for optimizing the parameters of the Muskingum routing model. By integrating the simulated annealing method into the genetic algorithm, the hybrid method avoids some troubles of traditional methods, such as the arduous trial-and-error procedure, premature convergence in the genetic algorithm, and search blindness in simulated annealing. The principle and implementation procedure of this algorithm are described. Numerical experiments show that the GSA can adjust the optimization population, prevent premature convergence and seek the global optimal result. Applications to the Nanyunhe River and Qingjiang River show that the proposed approach offers higher forecast accuracy and practicality.
Kriging-approximation simulated annealing algorithm for groundwater modeling
Shen, C. H.
2015-12-01
Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running complex groundwater models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
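The resampling step that distinguishes population annealing from plain simulated annealing can be sketched as follows (a minimal illustration of the usual Boltzmann reweighting between inverse temperatures; not the authors' code):

```python
import math
import random

def resample(population, energies, beta_old, beta_new, rng=random):
    """Population-annealing resampling: reweight each replica by
    exp(-(beta_new - beta_old) * E) so the population follows the
    Boltzmann ensemble at the new inverse temperature."""
    db = beta_new - beta_old
    e0 = min(energies)                      # shift energies for numerical stability
    w = [math.exp(-db * (e - e0)) for e in energies]
    total = sum(w)
    probs = [x / total for x in w]
    # multinomial resampling keeps the population size fixed
    return rng.choices(population, weights=probs, k=len(population))
```

Between resampling steps each replica is updated independently (e.g. by Metropolis sweeps), which is what makes the algorithm naturally parallel.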
Population annealing simulations of a binary hard-sphere mixture
Callaham, Jared; Machta, Jonathan
2017-06-01
Population annealing is a sequential Monte Carlo scheme well suited to simulating equilibrium states of systems with rough free energy landscapes. Here we use population annealing to study a binary mixture of hard spheres. Population annealing is a parallel version of simulated annealing with an extra resampling step that ensures that a population of replicas of the system represents the equilibrium ensemble at every packing fraction in an annealing schedule. The algorithm and its equilibration properties are described, and results are presented for a glass-forming fluid composed of a 50/50 mixture of hard spheres with diameter ratio of 1.4:1. For this system, we obtain precise results for the equation of state in the glassy regime up to packing fractions φ ≈0.60 and study deviations from the Boublik-Mansoori-Carnahan-Starling-Leland equation of state. For higher packing fractions, the algorithm falls out of equilibrium and a free volume fit predicts jamming at packing fraction φ ≈0.667 . We conclude that population annealing is an effective tool for studying equilibrium glassy fluids and the jamming transition.
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
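The specific heat tracked in such analyses is estimated from energy fluctuations at fixed temperature, C(T) = (⟨E²⟩ − ⟨E⟩²)/T². A minimal estimator sketch (generic, not the authors' phylogeny code):

```python
def specific_heat(energies, temperature):
    """Specific heat from energy fluctuations at a fixed temperature:
    C(T) = (<E^2> - <E>^2) / T^2. Peaks in C(T) over an annealing run
    mark the search's 'phase transitions'."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / temperature ** 2
```

Recording the energies sampled at each temperature level of an SA run and plotting this estimator against T reproduces the specific heat profiles discussed above.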
SIMULATED ANNEALING BASED POLYNOMIAL TIME QOS ROUTING ALGORITHM FOR MANETS
Liu Lianggui; Feng Guangzeng
2006-01-01
Multi-constrained Quality-of-Service (QoS) routing is a big challenge for Mobile Ad hoc Networks (MANETs), where the topology may change constantly. In this paper a novel QoS Routing Algorithm based on Simulated Annealing (SA_RA) is proposed. This algorithm first uses an energy function to translate multiple QoS weights into a single mixed metric and then seeks a feasible path by simulated annealing. The paper outlines the simulated annealing algorithm and analyzes the problems met when applying it to QoS routing (QoSR) in MANETs. Theoretical analysis and experimental results demonstrate that the proposed method is an effective approximation algorithm, showing better performance than other pertinent algorithms in seeking the (approximately) optimal configuration in polynomial time.
A theoretical comparison of evolutionary algorithms and simulated annealing
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Coordination Hydrothermal Interconnection Java-Bali Using Simulated Annealing
Wicaksono, B.; Abdullah, A. G.; Saputra, W. S.
2016-04-01
Hydrothermal power plant coordination aims to minimize the total operating cost of the system, represented by fuel cost, subject to constraints during optimization. Several methods can be used to perform this optimization. Simulated Annealing (SA) is one such method; it was inspired by the annealing or cooling process in the manufacture of crystalline materials. The basic principle of hydrothermal coordination is to use hydro power plants to support the base load while thermal power plants cover the remaining load. This study used two hydro power plant units and six thermal power plant units on a 25-bus system, calculating transmission losses and considering the power limits of each unit, aided by MATLAB software. Hydrothermal coordination using simulated annealing showed that the total cost of generation for 24 hours is 13,288,508.01.
Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing
Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.
2006-01-01
The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval arithm...
Analysis of Trivium by a Simulated Annealing variant
Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian
2010-01-01
... A characteristic of equation systems that may be efficiently solvable by means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method...
Molecular dynamics simulation of annealed ZnO surfaces
Min, Tjun Kit; Yoon, Tiem Leong [School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Lim, Thong Leng [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia)
2015-04-24
The effect of thermally annealing a slab of wurtzite ZnO, terminated by two surfaces, (0001) (which is oxygen-terminated) and (0001̄) (which is Zn-terminated), is investigated via molecular dynamics simulation by using reactive force field (ReaxFF). We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The ratio of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as formation of oxygen dimers on the surface and evolution of partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of the depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.
Wenbo Wu; Jiahong Liang; Xinyu Yao; Baohong Liu
2014-01-01
This paper addresses the problem of task allocation in real-time distributed systems with the goal of maximizing system reliability, which has been shown to be NP-hard. We take account of the deadline constraint to formulate this problem and then propose an algorithm called chaotic adaptive simulated annealing (XASA) to solve it. Firstly, XASA begins with chaotic optimization, which takes a chaotic walk in the solution space and generates several local minima; secondly, XASA improv...
Ranking important nodes in complex networks by simulated annealing
Sun, Yu; Yao, Pei-Yang; Wan, Lu-Jun; Shen, Jian; Zhong, Yun
2017-02-01
In this paper, a new method based on simulated annealing to rank important nodes in complex networks is presented. First, the concept of an importance sequence (IS), describing the relative importance of nodes in a complex network, is defined. Then, a measure to evaluate the reasonability of an IS is designed. By treating an IS as a state of the network and the measure of its reasonability as the energy of that state, the method finds the ground state of the network, i.e. the most reasonable IS, by simulated annealing. The results of experiments on real and artificial networks show that this ranking method is not only effective but can also be applied to different kinds of complex networks. Project supported by the National Natural Science Foundation of China (Grant No. 61573017) and the Natural Science Foundation of Shaanxi Province, China (Grant No. 2016JQ6062).
Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems
Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad
2014-04-01
This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of the variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'Move or not' criterion of the original VNS algorithm regarding the incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far by k← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. The geographical neighbourhood structure is introduced in constructing the neighbourhood structures for the CVRP of the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
Multi-Objective Simulating Annealing for Permutation Flow Shop Problems
Mokotoff, E.; Pérez, J.
2007-09-01
Real life scheduling problems require more than one criterion. Nevertheless, the complex nature of the Permutation Flow Shop problem has prevented the development of models with multiple criteria. Considering only one regular criterion, this scheduling problem was shown to be NP-complete. The Multi-Objective Simulated Annealing (MOSA) methods are metaheuristics based on Simulated Annealing to solve Multi-Objective Combinatorial Optimization (MOCO) problems, like the problem at hand. Starting from the general MOSA method introduced by Loukil et al. [1], we developed MOSA models to provide the decision maker with efficient solutions for the Permutation Flow Shop problem (common in the production of ceramic tiles). In this paper we present three models: two bicriteria models and one based on satisfaction levels for the main criterion.
Simulated Annealing for the 0/1 Multidimensional Knapsack Problem
Fubin Qian; Rui Ding
2007-01-01
In this paper a simulated annealing (SA) algorithm is presented for the 0/1 multidimensional knapsack problem. Problem-specific knowledge is incorporated in the algorithm description and evaluation of parameters in order to look into the performance of finite-time implementations of SA. Computational results show that SA performs much better than a genetic algorithm in terms of solution time, whilst having a modest loss of solution quality.
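The problem-specific knowledge mentioned above typically enters SA through the neighborhood move. A feasibility-preserving move for the 0/1 multidimensional knapsack problem can be sketched as follows (helper names are hypothetical, not the paper's implementation):

```python
import random

def knapsack_neighbor(x, values, weights, capacities, rng=random):
    """Flip one item in or out of the knapsack; revert the flip if any
    capacity dimension is violated, so the walk stays feasible.
    weights[i][d] is item i's weight in dimension d."""
    i = rng.randrange(len(x))
    y = list(x)
    y[i] = 1 - y[i]
    for d, cap in enumerate(capacities):
        if sum(w[d] * b for w, b in zip(weights, y)) > cap:
            return x                         # infeasible: keep current solution
    return y

def knapsack_value(x, values):
    """Objective to maximize (negate it for a minimizing SA loop)."""
    return sum(v * b for v, b in zip(values, x))
```

An alternative, also common in the literature, is to allow infeasible moves and penalize constraint violations in the energy function; which works better is part of the parameter evaluation such studies report.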
Solving geometric constraints with genetic simulated annealing algorithm
刘生礼; 唐敏; 董金祥
2003-01-01
This paper applies a genetic simulated annealing algorithm (SAGA) to solving geometric constraint problems. This method makes full use of the advantages of SAGA and can handle under-/over-constrained problems naturally. It has advantages over the Newton-Raphson method, since it is not sensitive to initial values, and its ability to yield multiple solutions is an advantage over other optimization methods for multi-solution constraint systems. Our experiments have proved the robustness and efficiency of this method.
Rayleigh wave inversion using heat-bath simulated annealing algorithm
Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng
2016-11-01
The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, and it is unsuitable to use local search methods (LSMs) as the inversion algorithm. In this study, a new strategy is proposed based on a variant of simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. Levenberg-Marquardt (LM) algorithm and a variant of SA, known as the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.
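The heat-bath variant differs from Metropolis-style SA in that a parameter's new value is drawn directly from the Boltzmann distribution over a discrete candidate set, with no separate accept/reject stage. A minimal one-parameter update sketch (hypothetical function names, not the authors' inversion code):

```python
import math
import random

def heat_bath_update(model, idx, candidates, misfit, temperature, rng=random):
    """One heat-bath SA step: resample parameter `idx` of `model` from the
    Boltzmann distribution over `candidates`, weighted by exp(-misfit/T)."""
    trials = []
    for c in candidates:
        m = list(model)
        m[idx] = c
        trials.append(m)
    e = [misfit(m) for m in trials]
    e0 = min(e)                              # shift for numerical stability
    w = [math.exp(-(x - e0) / temperature) for x in e]
    total = sum(w)
    # draw the new model directly; no accept/reject stage is needed
    return rng.choices(trials, weights=[x / total for x in w], k=1)[0]
```

Sweeping this update over all model parameters (here, the layer S-wave velocities) while lowering the temperature gives the one-step variant described above.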
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
Simulated annealing spectral clustering algorithm for image segmentation
Yifang Yang; Yuping Wang
2014-01-01
The similarity measure is crucial to the performance of spectral clustering. The Gaussian kernel function based on the Euclidean distance is usually adopted as the similarity measure. However, the Euclidean distance measure cannot fully reveal complex data distributions, and the result of spectral clustering is very sensitive to the scaling parameter. To solve these problems, a new manifold distance measure and a novel simulated annealing spectral clustering (SASC) algorithm based on the manifold distance measure are proposed. The simulated annealing based on genetic algorithm (SAGA), characterized by its rapid convergence to the global optimum, is used to cluster the sample points in the spectral mapping space. The proposed algorithm can not only reflect local and global consistency better, but also reduce the sensitivity of spectral clustering to the kernel parameter, which improves the algorithm's clustering performance. To efficiently apply the algorithm to image segmentation, the Nyström method is used to reduce the computational complexity. Experimental results show that compared with traditional clustering algorithms and popular spectral clustering algorithms, the proposed algorithm can achieve better clustering performance on several synthetic datasets, texture images and real images.
Simulated annealing approach to the max cut problem
Sen, Sandip
1993-03-01
In this paper we address the problem of partitioning the nodes of a random graph into two sets, so as to maximize the sum of the weights on the edges connecting nodes belonging to different sets. This problem has important real-life counterparts, but has been proven to be NP-complete. As such, a number of heuristic solution techniques have been proposed in literature to address this problem. We propose a stochastic optimization technique, simulated annealing, to find solutions for the max cut problem. Our experiments verify that good solutions to the problem can be found using this algorithm in a reasonable amount of time.
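The max cut objective and the single-node flip neighborhood commonly paired with SA can be sketched as follows (a generic illustration, not the author's experimental code):

```python
import random

def cut_value(assign, weights):
    """Total weight of edges crossing the cut; assign[i] in {0, 1}
    gives node i's side of the partition."""
    n = len(assign)
    return sum(weights[i][j]
               for i in range(n) for j in range(i + 1, n)
               if assign[i] != assign[j])

def flip_neighbor(assign, rng=random):
    """Neighborhood move: switch one random node to the other side."""
    i = rng.randrange(len(assign))
    y = list(assign)
    y[i] = 1 - y[i]
    return y
```

Since SA conventionally minimizes, the energy is the negated cut value; annealing with `flip_neighbor` then searches for a maximum-weight cut.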
Optimal estuarine sediment monitoring network design with simulated annealing.
Nunes, L M; Caeiro, S; Cunha, M C; Ribeiro, L
2006-02-01
An objective function based on geostatistical variance reduction, constrained to the reproduction of the probability distribution functions of selected physical and chemical sediment variables, is applied to the selection of the best set of compliance monitoring stations in the Sado river estuary in Portugal. These stations were to be selected from a large set of sampling stations from a prior field campaign. Simulated annealing was chosen to solve the optimisation function model. Both the combinatorial problem structure and the resulting candidate sediment monitoring networks are discussed, and the optimal dimension and spatial distribution are proposed. An optimal network of sixty stations was obtained from an original 153-station sampling campaign.
Stochastic annealing simulations of defect interactions among subcascades
Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N.
1997-04-01
The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.
Cheng-Ming Lee
2016-11-01
A reinforcement learning algorithm is proposed to improve the accuracy of short-term load forecasting (STLF) in this article. The proposed model integrates a radial basis function neural network (RBFNN), support vector regression (SVR), and an adaptive annealing learning algorithm (AALA). In the proposed methodology, firstly, the initial structure of the RBFNN is determined by using an SVR. Then, an AALA with time-varying learning rates is used to optimize the initial parameters of the SVR-RBFNN (AALA-SVR-RBFNN). In order to overcome stagnation in the search for the optimal RBFNN, particle swarm optimization (PSO) is applied to simultaneously find promising learning rates in the AALA. Finally, the short-term load demands are predicted by using the optimal RBFNN. The performance of the proposed methodology is verified on an actual load dataset from the Taiwan Power Company (TPC). Simulation results reveal that the proposed AALA-SVR-RBFNN achieves better load forecasting precision than various RBFNNs.
Sparse approximation problem: how rapid simulated annealing succeeds and fails
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
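For the planted, noiseless setup above, a minimal SA over support sets can be sketched as follows: the state is a set of k candidate columns, the energy is the least-squares residual of the data on that support, and a move swaps one column in for one out. This is an illustrative reformulation under assumed parameter choices (temperature schedule, move set), not the authors' algorithm.

```python
import numpy as np

def sa_sparse_support(A, y, k, steps=5000, t0=1.0, alpha=0.998, seed=0):
    """SA over k-element supports: energy is the squared least-squares
    residual of y restricted to the chosen columns of A."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    supp = list(rng.choice(n, size=k, replace=False))

    def energy(s):
        coef, *_ = np.linalg.lstsq(A[:, s], y, rcond=None)
        return float(np.sum((A[:, s] @ coef - y) ** 2))

    e = energy(supp)
    best_s, best_e = list(supp), e
    t = t0
    for _ in range(steps):
        out = rng.integers(k)            # position in the support to replace
        cand = int(rng.integers(n))      # candidate column to bring in
        if cand in supp:
            continue                     # no-op move, skip
        new = list(supp)
        new[out] = cand
        e_new = energy(new)
        # Metropolis rule on the residual energy.
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            supp, e = new, e_new
            if e < best_e:
                best_s, best_e = list(supp), e
        t *= alpha
    return sorted(best_s), best_e
```

On a small noiseless instance with a planted 2-sparse vector, the residual of the recovered support drops to numerical zero.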
Simulated annealing technique to design minimum cost exchanger
Khalfe Nadeem M.
2011-01-01
Owing to the wide utilization of heat exchangers in industrial processes, their cost minimization is an important target for both designers and users. Traditional design approaches are based on iterative procedures which gradually change the design and geometric parameters to satisfy a given heat duty and constraints. Although well proven, this kind of approach is time consuming and may not lead to a cost-effective design, as no cost criteria are explicitly accounted for. The present study explores the use of a nontraditional optimization technique, simulated annealing (SA), for the design optimization of shell-and-tube heat exchangers from an economic point of view. The optimization procedure involves the selection of the major geometric parameters such as tube diameter, tube length, baffle spacing, number of tube passes, tube layout, type of head, and baffle cut, with minimization of the total annual cost as the design target. The presented simulated annealing technique is simple in concept, has few parameters, and is easy to implement. Furthermore, the SA algorithm finds good-quality solutions quickly, giving the designer more degrees of freedom in the final choice with respect to traditional methods. The methodology takes into account the geometric and operational constraints typically recommended by design codes. Three different case studies are presented to demonstrate the effectiveness and accuracy of the proposed algorithm. The SA approach is able to reduce the total cost of the heat exchanger compared to the cost obtained by a previously reported GA approach.
Simulated Annealing-Based Krill Herd Algorithm for Global Optimization
Gai-Ge Wang
2013-01-01
Recently, Gandomi and Alavi proposed a novel swarm intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with the acceptance of a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of this improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
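The KS operator described above (greedy acceptance plus a small SA-style chance of keeping a worse position) can be sketched, for a minimization problem, roughly as follows. The shape is inferred from the abstract, not taken from the authors' code; the temperature handling is an assumption.

```python
import math
import random

def krill_select(old_pos, new_pos, fitness, temp, rng=random):
    """KS-style update for a minimization problem: keep the new position
    if it is no worse, otherwise accept it only with a small
    simulated-annealing-like probability exp(-delta / temp)."""
    delta = fitness(new_pos) - fitness(old_pos)   # > 0 means worse
    if delta <= 0 or rng.random() < math.exp(-delta / temp):
        return new_pos
    return old_pos
```

An improving move is always kept, while at a very low temperature a worsening move is essentially always rejected, which is the greedy limit of the operator.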
Morgan, John A
2016-01-01
The method of simulated annealing is adapted to the temperature-emissivity separation (TES) problem. A patch of surface at the bottom of the atmosphere is assumed to be a greybody emitter with spectral emissivity $\\epsilon(k)$ describable by a mixture of spectral endmembers. We prove that a simulated annealing search conducted according to a suitable schedule converges to a solution maximizing the $\\textit{A-Posteriori}$ probability that spectral radiance detected at the top of the atmosphere originates from a patch with stipulated $T$ and $\\epsilon(k)$. Any such solution will be nonunique. The average of a large number of simulated annealing solutions, however, converges almost surely to a unique Maximum A-Posteriori solution for $T$ and $\\epsilon(k)$. The limitation to a stipulated set of endmember emissivities may be relaxed by allowing the number of endmembers to grow without bound, and to be generic continuous functions of wavenumber with bounded first derivatives with respect to wavenumber.
Almaraashia, M.; John, Robert; Hopgood, A.; S. Ahmadi
2016-01-01
This paper reports the use of simulated annealing to design more efficient fuzzy logic systems to model problems with associated uncertainties. Simulated annealing is used within this work as a method for learning the best configurations of interval and general type-2 fuzzy logic systems to maximize their modeling ability. The combination of simulated annealing with these models is presented in the modeling of four benchmark problems including real-world problems. The type-2 fuzzy logic syste...
spsann - optimization of sample patterns using spatial simulated annealing
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use to solve optimization problems in the soil and geosciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
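The cooling behaviour described above (a linearly shrinking perturbation distance, with the number of perturbed points and the acceptance probability shrinking exponentially) can be sketched as a schedule function. The functional forms follow the description, but the constants are illustrative assumptions, not spsann's actual defaults.

```python
import math

def spsann_like_schedules(i, n_iter, d_max, pts0=5, acc0=0.95, k=5.0):
    """Return (max perturbation distance, number of perturbed points,
    acceptance probability) at iteration i of n_iter.
    Distance decays linearly; the other two decay exponentially."""
    frac = i / n_iter
    dist = d_max * (1.0 - frac)                        # linear decay
    n_pts = max(1, round(pts0 * math.exp(-k * frac)))  # exponential decay
    p_accept = acc0 * math.exp(-k * frac)              # exponential decay
    return dist, n_pts, p_accept
```

At the start the full perturbation distance and several perturbed points are used; by the final iteration the distance has shrunk to zero and only one point is perturbed, with a near-zero acceptance probability for worsening moves.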
A Scheduling Algorithm Based on Petri Nets and Simulated Annealing
Rachida H. Ghoul
2007-01-01
This study presents a hybrid Flexible Manufacturing System (HFMS) short-term scheduling problem. Based on the state of the art of general scheduling algorithms, we present the meta-heuristic we have decided to apply to a given example of an HFMS: the Simulated Annealing (SA) algorithm. The HFMS model, based on hierarchical Petri nets, is used to represent the static and dynamic behavior of the HFMS and to design scheduling solutions. The hierarchical Petri net model is regarded as being made up of a set of single timed colored Petri net models. Each single model represents one process, composed of many operations and tasks. The complex scheduling problem is thus decomposed into simple sub-problems, and the scheduling algorithm is applied to each sub-model in order to resolve conflicts on shared production resources.
Memoryless cooperative graph search based on the simulated annealing algorithm
Hou Jian; Yan Gang-Feng; Fan Zhen
2011-01-01
We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computation complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
Restoration of polarimetric SAR images using simulated annealing
Schou, Jesper; Skriver, Henning
2001-01-01
Filtering synthetic aperture radar (SAR) images ideally results in better estimates of the parameters characterizing the distributed targets in the images while preserving the structures of the nondistributed targets. However, these objectives are normally conflicting, often leading to a filtering approach favoring one of the objectives. An algorithm for estimating the radar cross-section (RCS) for intensity SAR images has previously been proposed in the literature, based on Markov random fields and the stochastic optimization method simulated annealing. A new version of the algorithm is presented ... are obtained while at the same time preserving most of the structures in the image. The algorithm is evaluated using multilook polarimetric L-band data from the Danish airborne EMISAR system, and the impact of the algorithm on the unsupervised H-α classification is demonstrated.
Optimization of multiple-layer microperforated panels by simulated annealing
Ruiz Villamil, Heidi; Cobo, Pedro; Jacobsen, Finn
2011-01-01
Sound absorption by microperforated panels (MPP) has received increasing attention in the past years as an alternative to conventional porous absorbers in applications with special cleanliness and health requirements. The absorption curve of an MPP depends on four parameters: the hole diameter, the panel thickness, the perforation ratio, and the thickness of the air cavity between the panel and an impervious wall. It is possible to find a proper combination of these parameters that provides an MPP absorbing in one or two octave bands within the frequency range of interest for noise control. Therefore, simulated annealing is proposed in this paper as a tool to solve the optimization problem of finding the best combination of the constitutive parameters of an ML-MPP providing the maximum average absorption within a prescribed frequency band.
Simulated annealing and joint manufacturing batch-sizing
Sarker Ruhul
2003-01-01
We address an important problem of a manufacturing system. The system procures raw materials from outside suppliers in lots and processes them to produce finished goods. We propose an ordering policy for raw materials to meet the requirements of a production facility which, in turn, must deliver finished products demanded by external buyers at fixed time intervals. First, a general cost model is developed considering both raw materials and finished products. This model is then used to develop a simulated annealing approach to determining an optimal ordering policy for the procurement of raw materials and the manufacturing batch size, minimizing the total cost of meeting customer demands in time. The solutions obtained are compared with those of traditional approaches, and numerical examples are presented.
Fuzzy unit commitment solution - A novel twofold simulated annealing approach
Saber, Ahmed Yousuf; Senjyu, Tomonobu; Yona, Atsushi; Urasaki, Naomitsu [Faculty of Engineering, University of the Ryukyus, 1 Senbaru, Nishihara-cho Nakagami, Okinawa 903-0213 (Japan); Funabashi, Toshihisa [Meidensha Corporation, Riverside Building 36-2, Tokyo 103-8515 (Japan)
2007-10-15
The authors propose a twofold simulated annealing (twofold-SA) method for the optimization of the fuzzy unit commitment formulation in this paper. In the proposed method, simulated annealing (SA) and fuzzy logic are combined to obtain SA acceptance probabilities from fuzzy membership degrees. The fuzzy load is calculated from error statistics and an initial solution is generated by a priority list method. The initial solution is decomposed into hourly schedules and each hourly schedule is modified by the decomposed-SA using a bit-flipping operator. Fuzzy membership degrees are the selection attributes of the decomposed-SA. A new solution consists of these hourly schedules over the entire scheduling period after repair, as unit-wise constraints may not be fulfilled at the time of an individual hourly-schedule modification. This helps to detect and modify promising schedules at the appropriate hours. In the coupling-SA, this new solution is accepted for the next iteration if its cost is less than that of the current solution; a higher-cost new solution is accepted according to the temperature-dependent total-cost membership function. The computation time of the proposed method is also improved by the imprecise tolerance of the fuzzy model. Besides, excess units with a system-dependent probability distribution help to handle constraints efficiently, and imprecise economic load dispatch (ELD) calculations are modified to save execution time. The proposed method is tested using standard reported data sets. Numerical results show an improvement in solution cost and time compared to the results obtained from other existing methods.
Wenbo Wu
2014-01-01
This paper addresses the problem of task allocation in real-time distributed systems with the goal of maximizing system reliability, which has been shown to be NP-hard. We take the deadline constraint into account to formulate this problem and then propose an algorithm called chaotic adaptive simulated annealing (XASA) to solve it. Firstly, XASA begins with chaotic optimization, which takes a chaotic walk in the solution space and generates several local minima; secondly, XASA improves the SA algorithm via several adaptive schemes and continues to search for the optimum based on the results of the chaotic optimization. The effectiveness of XASA is evaluated by comparison with the traditional SA algorithm and an improved SA algorithm. The results show that XASA achieves a satisfactory speedup without loss of solution quality.
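The two-phase idea above (a chaotic walk over the solution space followed by SA refinement) can be illustrated on a one-dimensional continuous problem. The logistic map is a common choice for the chaotic walk; the paper targets task allocation, so everything here, including the schedules and step sizes, is an assumed toy version of the pipeline rather than XASA itself.

```python
import math
import random

def chaotic_then_sa(f, lo, hi, n_chaos=200, sa_steps=2000, t0=1.0,
                    alpha=0.995, seed=0):
    """Phase 1: logistic-map chaotic walk samples [lo, hi].
    Phase 2: SA refines the best chaotic sample, with a step size
    that shrinks along with the temperature."""
    rng = random.Random(seed)
    x = 0.345                       # logistic-map state in (0, 1)
    best_x, best_e = None, float("inf")
    for _ in range(n_chaos):        # phase 1: chaotic exploration
        x = 4.0 * x * (1.0 - x)
        cand = lo + (hi - lo) * x
        e_cand = f(cand)
        if e_cand < best_e:
            best_x, best_e = cand, e_cand
    cur, e, t = best_x, best_e, t0
    for _ in range(sa_steps):       # phase 2: SA refinement
        step = (hi - lo) * 0.1 * t / t0          # adaptive step size
        cand = min(hi, max(lo, cur + rng.uniform(-step, step)))
        d = f(cand) - e
        if d <= 0 or rng.random() < math.exp(-d / t):
            cur, e = cand, f(cand)
        t *= alpha
    return cur, e
```

On a simple convex objective the chaotic phase lands near the basin of the minimum and the SA phase polishes the estimate.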
Enhanced Simulated Annealing for Solving Aggregate Production Planning
Mohd Rizam Abu Bakar
2016-01-01
Simulated annealing (SA) has been an effective means of addressing difficult optimisation problems, and is now a common research discipline with several productive applications, such as production planning. Because aggregate production planning (APP) is one of the most considerable problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimise it with SA. In the course of optimising the APP problem, we found that the capability of SA was inadequate and its performance substandard, particularly for a sizable constrained APP problem with many decision variables and plenty of constraints. Since the algorithm works sequentially, the current state generates only one next state, which makes the search slower, and the search may fall into a local minimum that represents the best solution in only part of the solution space. In order to enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed. We augment the search space by starting with N+1 solutions instead of one. To analyse and compare the operation of the MSA with the standard SA and harmony search (HS), evaluations are made against the real performance of an industrial company and by simulation. The results show that, compared to SA and HS, MSA offers better-quality solutions with regard to convergence and accuracy.
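Starting from N+1 solutions instead of one, as described above, amounts to running several SA chains under a shared temperature schedule and reporting the best end state. The sketch below is a generic multi-start SA under assumed schedules, not the authors' MSA.

```python
import math
import random

def multi_start_sa(f, neighbor, starts, steps=1000, t0=1.0, alpha=0.995,
                   seed=0):
    """Run the standard SA acceptance rule from several starting
    solutions in parallel (shared cooling schedule) and return the
    best (state, energy) pair at the end."""
    rng = random.Random(seed)
    chains = [(s, f(s)) for s in starts]
    t = t0
    for _ in range(steps):
        updated = []
        for cur, e in chains:
            cand = neighbor(cur, rng)
            d = f(cand) - e
            # Metropolis rule, applied independently to each chain.
            if d <= 0 or rng.random() < math.exp(-d / t):
                cur, e = cand, f(cand)
            updated.append((cur, e))
        chains = updated
        t *= alpha
    return min(chains, key=lambda c: c[1])
```

With three widely separated starts on a convex objective, all chains drift toward the minimum and the best of them ends close to it.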
Adaptive Sampling in Hierarchical Simulation
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
I Gede Agus Widyadana
2002-01-01
The research focuses on comparing the Genetic algorithm and Simulated Annealing in terms of performance and processing time. The main purpose is to find out how well both algorithms solve the problem of minimizing makespan and total flowtime in a particular flow shop system. The performance of the algorithms is assessed by simulating problems with varying combinations of jobs and machines. The results show that Simulated Annealing outperforms the Genetic algorithm by up to 90%. The Genetic algorithm scored better only on processing time, but the observed trend suggests that for problems with many jobs and many machines, Simulated Annealing will run faster than the Genetic algorithm. Keywords: Genetic algorithm, Simulated Annealing, flow shop, makespan, total flowtime.
Traveling Salesman Approach for Solving Petrol Distribution Using Simulated Annealing
Zuhaimy Ismail
2008-01-01
This research presents an attempt to solve a logistics company's problem of delivering petrol to petrol stations in the state of Johor. The delivery system is formulated as a travelling salesman problem (TSP): finding an optimal route for visiting stations and returning to the point of origin, where the inter-station distances are symmetric and known. This real-world application is a deceptively simple combinatorial problem, and our approach is to develop solutions based on local search and meta-heuristics. The objective is simply the time spent or distance travelled by the salesman visiting the n cities (or nodes) cyclically: in one tour the vehicle visits each station just once and finishes where it started. This research presents the development of a solution engine based on a local search method known as the Greedy Method; with the result generated as the initial solution, Simulated Annealing (SA) and Tabu Search (TS) are further used to improve the search and provide the best solution. A user-friendly optimization program was developed using Microsoft C++ to solve the TSP and provide solutions to future TSP instances, which may be classified into daily or advanced management and engineering problems.
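The greedy-then-SA pipeline above can be sketched for a symmetric distance matrix: a nearest-neighbour pass builds the initial tour, and SA with segment-reversal (2-opt style) moves refines it. This is a generic illustration of the pipeline; the Tabu Search stage and all parameter values are omitted or assumed.

```python
import math
import random

def tour_length(tour, dist):
    # Cyclic tour length; tour[-1] -> tour[0] closes the loop.
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def greedy_then_sa(dist, steps=5000, t0=1.0, alpha=0.999, seed=0):
    """Greedy nearest-neighbour initial tour, refined by SA with
    2-opt style segment reversals."""
    n = len(dist)
    rng = random.Random(seed)
    # Phase 1: greedy nearest-neighbour construction from station 0.
    tour, left = [0], set(range(1, n))
    while left:
        nxt = min(left, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        left.remove(nxt)
    e, t = tour_length(tour, dist), t0
    # Phase 2: SA over the reversal neighbourhood.
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        d = tour_length(cand, dist) - e
        if d <= 0 or rng.random() < math.exp(-d / t):
            tour, e = cand, e + d
        t *= alpha
    return tour, e
```

For four stations at the corners of a unit square, the optimal cyclic tour is the perimeter, of length 4.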
Optical Design of Multilayer Achromatic Waveplate by Simulated Annealing Algorithm
Jun Ma; Jing-Shan Wang; Carsten Denker; Hai-Min Wang
2008-01-01
We applied a Monte Carlo method, the simulated annealing algorithm, to carry out the design of multilayer achromatic waveplates. We present solutions for three-, six- and ten-layer achromatic waveplates. The optimized retardance settings are found to be 89°51'39"±0°33'37" and 89°54'46"±0°22'4" for the six- and ten-layer waveplates, respectively, for a wavelength range from 1000 nm to 1800 nm. The polarimetric properties of multilayer waveplates are investigated in several numerical experiments. In contrast to the previously proposed three-layer achromatic waveplate, the fast axes of the new six- and ten-layer achromatic waveplates remain at fixed angles, independent of the wavelength. Two applications of multilayer achromatic waveplates are discussed: a general-purpose phase shifter and the birefringent filter in the Infrared Imaging Magnetograph (IRIM) system of the Big Bear Solar Observatory (BBSO). We also tested an experimental method to measure the retardance of waveplates.
Simulated Annealing Technique for Routing in a Rectangular Mesh Network
Noraziah Adzhar
2014-01-01
In the process of automatic design for printed circuit boards (PCBs), the phase following cell placement is routing. Routing is a notoriously difficult problem: even the simplest routing problem, which consists of a set of two-pin nets, is known to be NP-complete. In this research, the routing region is first tessellated into a uniform Nx×Ny array of square cells. The ultimate goal for a routing problem is to achieve complete automatic routing with minimal need for manual intervention, so the shortest path for all connections needs to be established. While the classical Dijkstra's algorithm is guaranteed to find the shortest path for a single net, each routed net forms obstacles for later paths. This adds complexity to routing later nets and makes their routes longer than the optimal path, or sometimes impossible to complete. Today's sequential routing often applies heuristic methods to further refine the solution: all nets are rerouted in different orders to improve the quality of the routing. This motivates us to apply simulated annealing, a metaheuristic method, to our routing model to produce better candidate routing sequences.
A Simulated Annealing Approach for the Train Design Optimization Problem
Federico Alonso-Pecina
2017-01-01
The Train Design Optimization Problem regards making optimal decisions on the number and movement of locomotives and crews through a railway network, so as to satisfy requested pick-up and delivery of car blocks at stations. In a mathematical programming formulation, the objective function to minimize is composed of the costs associated with the movement of locomotives and cars, the loading/unloading operations, the number of locomotives, and the crews' return to their departure stations. The constraints include upper bounds on the number of car blocks per locomotive, the number of car block swaps, and the number of locomotives passing through railroad segments. We propose a heuristic method to solve this highly combinatorial problem in two steps. The first finds an initial, feasible solution by means of an ad hoc algorithm. The second step uses the simulated annealing concept to improve the initial solution, followed by a procedure aiming to further reduce the number of needed locomotives. We show that our results are competitive with those found in the literature.
Hybrid annealing using a quantum simulator coupled to a classical computer
Graß, Tobias
2016-01-01
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Simulated annealing is a computational technique which explores the configuration space by mimicking thermal noise. By slow cooling, it freezes the system in a low-energy configuration, but the algorithm often gets stuck in local minima. In quantum annealing, the thermal noise is replaced by controllable quantum fluctuations, and the technique can be implemented in modern quantum simulators. However, quantum-adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configurati...
Differential evolution and simulated annealing algorithms for mechanical systems design
H. Saruhan
2014-09-01
In this study, nature-inspired algorithms, the Differential Evolution (DE) and the Simulated Annealing (SA), are utilized to seek a global optimum solution for the weight of a ball bearings link system assembly with constraints and mixed design variables. The Genetic Algorithm (GA) and the Evolution Strategy (ES) serve as references for the examination and validation of the DE and the SA. The main purpose is to minimize the weight of an assembly system composed of a shaft and two ball bearings. The ball bearings link system is used extensively in many machinery applications, and among mechanical systems designers pay great attention to it because of its significant industrial importance. The problem is complex and time consuming due to the mixed design variables and the inequality constraints imposed on the objective function. The results showed that the DE and the SA converged reliably to the global optimum solution, so their application to mechanical system design can be very useful in many real-world design problems. Moreover, the comparison confirms the effectiveness and superiority of the DE over the other algorithms, the SA, the GA, and the ES, in terms of solution quality: an assembly weight of 634,099 gr was obtained using the DE, while 671,616 gr, 728,213.8 gr, and 729,445.5 gr were obtained using the SA, the ES, and the GA, respectively.
Sensitivity study on hydraulic well testing inversion using simulated annealing
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1997-11-01
For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
Wanneng Shu
2009-01-01
Quantum-inspired genetic algorithm (QGA) is applied to simulated annealing (SA) to develop a class of quantum-inspired simulated annealing genetic algorithm (QSAGA) for combinatorial optimization. While preserving the advantages of QGA, QSAGA takes advantage of the SA algorithm so as to avoid premature convergence. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem. The results show that QSAGA performs well, without premature conve...
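As a point of reference for the knapsack test bed mentioned above, a plain-SA baseline for the 0/1 knapsack problem can be sketched as follows: each move flips one item in or out, infeasible flips are rejected, and value-decreasing flips are accepted with a Boltzmann probability. This is a generic baseline under assumed parameters, not QSAGA.

```python
import math
import random

def sa_knapsack(values, weights, cap, steps=5000, t0=5.0, alpha=0.999,
                seed=0):
    """Plain simulated annealing for the 0/1 knapsack problem.
    Maximizes total value subject to the capacity constraint."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n                      # current selection vector
    val = wt = 0
    best_val, best_x = 0, list(x)
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        sign = 1 if x[i] == 0 else -1
        dv, dw = sign * values[i], sign * weights[i]
        if wt + dw > cap:
            continue                 # infeasible flip, skip it
        # Maximization: accept value gains always, losses with prob exp(dv/t).
        if dv >= 0 or rng.random() < math.exp(dv / t):
            x[i] ^= 1
            val += dv
            wt += dw
            if val > best_val:
                best_val, best_x = val, list(x)
        t *= alpha
    return best_val, best_x
```

On a tiny instance (values 6, 10, 12; weights 1, 2, 3; capacity 5) the optimum packs the last two items for a value of 22.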
Gregorius Satia Budhi
2003-01-01
A Flexible Manufacturing System (FMS) is a manufacturing system formed from several numerically controlled machines combined with a material handling system, so that different jobs can be processed by different machine sequences. FMS combines the high productivity and flexibility of the Transfer Line and Job Shop manufacturing systems. In this research, an Activity-Based Costing (ABC) approach was used as the weight in searching for the operation route on the proper machine, so that the total production cost can be optimized. The search method used in this experiment is Simulated Annealing, a variant of the Hill Climbing search method. The ideal operation time to process a part was used as the annealing schedule. Empirical tests showed that using the ABC approach and Simulated Annealing to search for routes (the routing process) can optimize the total production cost. Furthermore, using the ideal operation time to process a part as the annealing schedule controls the processing time well.
Computer simulation of laser annealing of a nanostructured surface
Ivanov, D.; Marinov, I.; Gorbachev, Y.; Smirnov, A.; Krzhizhanovskaya, V.
2010-01-01
Laser annealing technology is used in mass production of new-generation semiconductor materials and nano-electronic devices like the MOS-based (metal-oxide-semiconductor) integrated circuits. Manufacturing sub-100 nm MOS devices demands application of ultra-shallow doping (junctions), which requires
Theodorakos, I.; Zergioti, I.; Vamvakas, V.; Tsoukalas, D.; Raptis, Y. S.
2014-01-01
In this work, a picosecond diode-pumped solid-state laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nano-crystallization of an amorphous silicon layer. These experiments were conducted as an alternative/complement to the plasma-enhanced chemical vapor deposition method for fabrication of micromorph tandem solar cells. The laser experimental work was combined with simulations of the annealing process, in terms of the evolution of the temperature distribution, in order to predetermine the optimum annealing conditions. The structural properties of the annealed material were studied as a function of several annealing parameters (wavelength, pulse duration, fluence) by X-ray diffraction, SEM, and micro-Raman techniques.
Green, P. L.
2015-02-01
This work details the Bayesian identification of a nonlinear dynamical system using a novel MCMC algorithm: 'Data Annealing'. Data Annealing is similar to Simulated Annealing in that it allows the Markov chain to easily clear 'local traps' in the target distribution. To achieve this, training data is fed into the likelihood such that its influence over the posterior is introduced gradually - this allows the annealing procedure to be conducted with reduced computational expense. Additionally, Data Annealing uses a proposal distribution which allows it to conduct a local search accompanied by occasional long jumps, reducing the chance that it will become stuck in local traps. Here it is used to identify an experimental nonlinear system. The resulting Markov chains are used to approximate the covariance matrices of the parameters in a set of competing models before the issue of model selection is tackled using the Deviance Information Criterion.
Speagle, Joshua S; Eisenstein, Daniel J; Masters, Daniel C; Steinhardt, Charles L
2015-01-01
Using a grid of $\\sim 2$ million elements ($\\Delta z = 0.005$) adapted from COSMOS photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound rapid but "greedy" optimization schemes, even with additional stochastic sampling methods. In order to robustly and efficiently explore these surfaces, we develop BAD-Z [Brisk Annealing-Driven Redshifts (Z)], which combines ensemble Markov Chain Monte Carlo (MCMC) sampling with simulated annealing to sample arbitrarily large, pre-generated grids in approximately constant time. Using a mock catalog of 384,662 objects, we show BAD-Z samples $\\sim 40$ times more efficiently compared to a brute-force counterpart while maintaining similar levels of accuracy. Our results represent first steps toward designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation...
Simulation of annealing process effect on texture evolution of deep-drawing sheet St15
Jinghong Sun; Yazheng Liu; Leyu Zhou
2005-01-01
A two-dimensional cellular automaton method was used to simulate grain growth during the recrystallization annealing of deep-drawing sheet St15, taking the simulated result of recrystallization and the experimental result of the annealing texture of deep-drawing sheet St15 as the initial condition and reference. By means of computer simulation, the microstructures and textures at different stages of grain growth were predicted. The grain size, shape and texture become stable after grain growth at a constant temperature of 700℃ for 10 h, and the advantageous texture components {111} and {111} are dominant.
Liang, Faming
2014-04-03
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
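The gap between the two cooling schedules discussed above can be made concrete with a small sketch; the exact constants and offsets inside the schedules are illustrative assumptions, not taken from the article:

```python
import math

def log_schedule(t0, k):
    """Logarithmic cooling, T_k = t0 / log(k + 2): the classical schedule
    that guarantees convergence of SA but decreases impractically slowly."""
    return t0 / math.log(k + 2)

def sqrt_schedule(t0, k):
    """Square-root cooling, T_k = t0 / sqrt(k + 1): far faster, of the kind
    permitted under the stochastic approximation annealing framework."""
    return t0 / math.sqrt(k + 1)

# After a million iterations the logarithmic schedule is still warm,
# while the square-root schedule is already near zero.
t0 = 1.0
print(log_schedule(t0, 10**6))   # ~0.072
print(sqrt_schedule(t0, 10**6))  # ~0.001
```

The point of the article is that, under the stochastic approximation framework, the much faster square-root decay can still reach the global optima as the temperature tends to zero.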
Saboonchi, Ahmad [Department of Mechanical Engineering, Isfahan University of Technology, Isfahan 84154 (Iran); Hassanpour, Saeid [Rayan Tahlil Sepahan Co., Isfahan Science and Technology Town, Isfahan 84155 (Iran); Abbasi, Shahram [R and D Department, Mobarakeh Steel Complex, Isfahan (Iran)
2008-11-15
Cold rolled steel coils are annealed in batch furnaces to obtain desirable mechanical properties. Annealing operations involve heating and cooling cycles which take a long time due to the high weight of the coils being annealed. To reduce annealing time, a simulation code was developed that is capable of evaluating more effective schedules for annealing coils during the heating process. This code is additionally capable of accurately determining the furnace turn-off time for different coil weights and charge dimensions. After studying many heating schedules and considering the heat transfer mechanism in the annealing furnace, a new schedule with the most advantages was selected as the new operating condition in the hydrogen annealing plant. All the furnaces were adjusted to the new heating schedule after experiments had been carried out to ensure the accuracy of the code and the fitness of the new operating condition. Comparison of similar yields of cold rolled coils over two months revealed that under the new heating schedule the specific energy consumption of the furnaces decreased by 11%, the heating cycle time by 16%, and the hydrogen consumption by 14%.
Simulated annealing algorithm for the TSP
朱静丽
2011-01-01
The traveling salesman problem (TSP) is a combinatorial optimization problem with NP-complete computational complexity. This paper analyzes the simulated annealing algorithm model, studies the feasibility of using simulated annealing to solve the TSP, and gives a concrete implementation of a simulated annealing algorithm for the TSP.
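A minimal simulated annealing solver for the TSP might look like the following; the neighbourhood move (2-opt segment reversal) and all parameter values are illustrative assumptions, not details from the paper:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_tsp(dist, t0=100.0, alpha=0.99, steps=20000, seed=1):
    """Simulated annealing for the TSP with 2-opt (segment reversal) moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, dist)
    best, best_len = tour[:], cur
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        d = tour_length(cand, dist) - cur
        if d <= 0 or rng.random() < math.exp(-d / t):
            tour, cur = cand, cur + d          # Metropolis acceptance
            if cur < best_len:
                best, best_len = tour[:], cur
        t = max(t * alpha, 1e-9)               # geometric cooling, floored
    return best, best_len
```

On a toy instance of six cities placed evenly on a unit circle, the optimal tour is simply the circle order, and the sketch recovers it quickly.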
Cook, Darcy; Ferens, Ken; Kinsner, Witold
Simulated Annealing (SA) has been shown to be a successful technique for optimization problems. It has been applied to both continuous function optimization and combinatorial optimization. There has been some work on modifying the SA algorithm to exploit properties of chaotic processes with the goal of reducing the time to converge to an optimal or good solution. There are several variations of these chaotic simulated annealing (CSA) algorithms. In this paper a new variation of chaotic simulated annealing is proposed and applied to a combinatorial optimization problem in multiprocessor task allocation. The experiments show the CSA algorithms reach a good solution faster than traditional SA algorithms in many cases because of a wider initial solution search.
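The abstract does not spell out the paper's specific CSA variant; one common pattern replaces the pseudo-random draws in SA with a chaotic sequence, for example from the logistic map. The sketch below assumes that pattern and uses a simple continuous objective rather than the paper's task-allocation problem:

```python
import math

def logistic_stream(x0=0.345, r=4.0):
    """Chaotic number stream from the logistic map x <- r*x*(1-x).
    At r = 4 the orbit is chaotic and densely fills (0, 1)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def csa_minimize(f, x, chaos, t0=1.0, alpha=0.98, steps=3000, span=2.0):
    """SA where both the perturbation and the acceptance draw come from a
    chaotic stream instead of a pseudo-random generator (one common
    chaotic-SA pattern; the paper's exact variant may differ)."""
    best, best_f, cur_f, t = x, f(x), f(x), t0
    for _ in range(steps):
        cand = x + span * (next(chaos) - 0.5)   # chaotic perturbation
        d = f(cand) - cur_f
        if d <= 0 or next(chaos) < math.exp(-d / t):
            x, cur_f = cand, cur_f + d
            if cur_f < best_f:
                best, best_f = x, cur_f
        t *= alpha
    return best, best_f
```

For instance, minimizing `f(x) = (x - 3)**2` from a start of 0 drives the iterate toward the minimum at 3.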
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
Recent numerical results (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
高红民; 周惠; 徐立中; 石爱业
2014-01-01
A hybrid feature selection and classification strategy is proposed based on a simulated annealing genetic algorithm and multiple instance learning (MIL). The band selection method is based on subspace decomposition, combining the simulated annealing algorithm with the genetic algorithm in choosing different crossover and mutation probabilities, as well as mutation individuals. MIL is then combined with image segmentation, clustering and support vector machine algorithms to classify hyperspectral images. The experimental results show that the proposed method achieves a high classification accuracy of 93.13% with small training samples and overcomes the weaknesses of conventional methods.
Sousa, Tiago M; Soares, Tiago; Morais, Hugo
2016-01-01
The massive use of distributed generation and electric vehicles will lead to a more complex management of the power system, requiring new approaches to be used in the optimal resource scheduling field. Electric vehicles with vehicle-to-grid capability can be useful for the aggregator players...... of the aggregator total operation costs. The case study considers a distribution network with 33-bus, 66 distributed generation and 2000 electric vehicles. The proposed simulated annealing is matched with a deterministic approach allowing an effective and efficient comparison. The simulated annealing presents...
Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup
2011-01-01
The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing a Chordal Ring structure in an optical backbone network. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology...... of the networks. There have been many uses of evolutionary algorithms to solve problems of a combinatorial nature that are extremely hard to solve by exact approaches. Both Genetic and Simulated Annealing algorithms use a controlled stochastic method to search for the solution....... The paper combines the algorithms in order to analyze the impact of implementation performance....
Chu Shuchuan; John F. Roddick
2003-01-01
In this paper, a cluster generation algorithm for vector quantization using a tabu search approach with simulated annealing is proposed. The main idea of this algorithm is to use the tabu search approach to generate non-local moves for the clusters and apply the simulated annealing technique to select the current best solution, thus improving the cluster generation and reducing the mean squared error. Preliminary experimental results demonstrate that the proposed approach is superior to the tabu search approach with the Generalised Lloyd algorithm.
Juan Frausto-Solis
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on the Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively; they are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested on several instances of the PFP, showing that the use of both distributions is better than using only the Boltzmann distribution as in classical SA.
Optimal design of hydraulic manifold blocks based on niching genetic simulated annealing algorithm
Jia Chunqiang; Yu Ling; Tian Shujun; Gao Yanming
2007-01-01
To solve the combinatorial optimization problem of integrated outer layout and inner connection schemes in the design of hydraulic manifold blocks (HMB), a hybrid genetic simulated annealing algorithm based on niche technology is presented. This hybrid algorithm, which combines a genetic algorithm, a simulated annealing algorithm and niche technology, has a strong capability in global and local search, and all extrema can be found in a short time without strict requirements on parameter preferences. For the complex constrained solid spatial layout problems in HMB, an optimizing mathematical model is presented. The key technologies in the integrated layout and connection design of HMB, including the realization of coding, annealing operations and genetic operations, are discussed. The framework of an HMB optimal design system based on the hybrid optimization strategy is proposed. An example is given to testify to the effectiveness and feasibility of the algorithm.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-30
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.
Stochastic annealing simulation of copper under neutron irradiation
Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N. [Risoe National Lab., Roskilde (Denmark)
1998-03-01
This report is a summary of a presentation made at ICFRM-8 on computer simulations of defect accumulation during irradiation of copper to low doses at room temperature. The simulation results are in good agreement with experimental data on defect cluster densities in copper irradiated in RTNS-II.
Using genetic/simulated annealing algorithm to solve disassembly sequence planning
Wu Hao; Zuo Hongfu
2009-01-01
disassembly sequence. The solution methodology, based on the genetic/simulated annealing algorithm with a binary-tree algorithm, is given. Finally, an example is analyzed in detail, and the result shows that the model is correct and efficient.
Thamilselvan Rakkiannan
2012-01-01
Problem statement: The Job Shop Scheduling Problem (JSSP) is regarded as one of the most difficult NP-hard combinatorial problems. The problem consists of determining the most efficient schedule for jobs that are processed on several machines. Approach: In this study a Genetic Algorithm (GA) integrated with a parallel version of the Simulated Annealing (SA) algorithm is applied to the job shop scheduling problem. The proposed algorithm is implemented in a distributed environment using the Remote Method Invocation concept. A new genetic operator and a parallel simulated annealing algorithm are developed for solving job shop scheduling. Results: The implementation successfully examines the convergence and effectiveness of the proposed hybrid algorithm. The JSS problems are tested with well-known benchmark problems, which are used to measure the quality of the proposed system. Conclusion/Recommendations: The empirical results show that the proposed genetic algorithm with simulated annealing is quite successful in achieving better solutions than the individual genetic or simulated annealing algorithms.
A Simulated Annealing Algorithm for Maximum Common Edge Subgraph Detection in Biological Networks
Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik
2016-01-01
introduce a heuristic algorithm for the multiple maximum common edge subgraph problem that is able to detect large common substructures shared across multiple, real-world size networks efficiently. Our algorithm uses a combination of iterated local search, simulated annealing and a pheromone...
Improving Simulated Annealing by Recasting it as a Non-Cooperative Game
Wolpert, David; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
An Evaluation of a Modified Simulated Annealing Algorithm for Various Formulations
1990-08-01
plans and procedures for the improved operation of existing systems (Reklaitis, Ravindran, & Ragsdell, 1983)." A gas pipeline flow problem is used to...simulated annealing, Journal of Statistical Physics, 45(5/6), 885-890. Reklaitis, G. V., Ravindran, A., & Ragsdell, K. M. (1983). Engineering
Simulated Annealing Genetic Algorithm Based Schedule Risk Management of IT Outsourcing Project
Fuqiang Lu
2017-01-01
IT outsourcing is an effective way to enhance core competitiveness for many enterprises, but the schedule risk of an IT outsourcing project may cause enormous economic loss to the enterprise. In this paper, Distributed Decision Making (DDM) theory and principal-agent theory are used to build a model for schedule risk management of IT outsourcing projects. In addition, a hybrid algorithm combining simulated annealing (SA) and a genetic algorithm (GA) is designed, namely, the simulated annealing genetic algorithm (SAGA). The effect of the proposed model on the schedule risk management problem is analyzed in a simulation experiment. Meanwhile, the simulation results of the three algorithms GA, SA, and SAGA show that SAGA is superior to the other two algorithms in terms of stability and convergence. Consequently, this paper provides a scientific quantitative proposal for decision makers who need to manage the schedule risk of IT outsourcing projects.
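One common way of hybridizing GA and SA, as SAGA-style algorithms typically do, is to subject each offspring to a Metropolis acceptance test under a cooling temperature. The sketch below is a generic illustration of that idea on a one-dimensional objective, not the paper's exact algorithm; population size, cooling rate, and mutation scale are all illustrative assumptions:

```python
import math
import random

def saga_min(f, bounds, pop_size=20, gens=60, t0=1.0, alpha=0.9, seed=3):
    """Hybrid GA + SA sketch: an offspring replaces its parent only if it is
    better, or otherwise with Metropolis probability exp(-d/T); T cools
    geometrically each generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    t = t0
    for _ in range(gens):
        nxt = []
        for parent in pop:
            mate = rng.choice(pop)
            child = 0.5 * (parent + mate)             # arithmetic crossover
            child += rng.gauss(0.0, 0.1 * (hi - lo))  # Gaussian mutation
            child = min(max(child, lo), hi)
            d = f(child) - f(parent)
            # SA-style acceptance keeps diversity early, turns greedy later
            if d <= 0 or rng.random() < math.exp(-d / t):
                nxt.append(child)
            else:
                nxt.append(parent)
        pop = nxt
        t *= alpha
    return min(pop, key=f)
```

The early high-temperature generations tolerate worse offspring (helping the GA escape premature convergence), while late generations behave like pure selection.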
Synthesis of optimal digital shapers with arbitrary noise using simulated annealing
Regadío, Alberto, E-mail: aregadio@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain); Sánchez-Prieto, Sebastián, E-mail: sebastian.sanchez@uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Tabero, Jesús, E-mail: taberogj@inta.es [Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain)
2014-02-21
This paper presents the structure, design and implementation of a new way of determining the optimal shaping in time-domain for spectrometers by means of simulated annealing. The proposed algorithm is able to adjust automatically and in real-time the coefficients for shaping an input signal. A practical prototype was designed, implemented and tested on a PowerPC 405 embedded in a Field Programmable Gate Array (FPGA). Lastly, its performance and capabilities were measured using simulations and a neutron monitor.
Wang Hongkai; Guan Yanyong; Xue Peijun
2008-01-01
In rough communication, because each agent has a different language and cannot communicate precisely with the others, a concept translated among multiple agents loses some information, resulting in a coarser concept. With different translation sequences, the information loss varies. To find the translation sequence in which the jth agent taking part in rough communication gets maximum information, a simulated annealing algorithm is used. Analysis and simulation of this algorithm demonstrate its effectiveness.
Destya Arisetyanti
2012-09-01
The Digital Video Broadcasting Terrestrial (DVB-T) standard is implemented in a Single Frequency Network (SFN) configuration, in which all transmitters in a network operate on the same frequency channel and transmit at the same time. SFN is preferred over its predecessor, the Multi Frequency Network (MFN), because it uses frequency more efficiently and provides a wider coverage area. Since the SFN configuration is based on Orthogonal Frequency Division Multiplexing (OFDM), the receiver can exploit a multipath scenario by combining signals from different transmitters. In this study, building height and count data are applied through free-space and knife-edge propagation prediction models to estimate received power and signal delay. Carrier (C) and carrier-to-interference (C/I) values are computed to assess signal quality at the receiver. Transmitter location parameters are then optimized by a Simulated Annealing algorithm using the three best cooling schedules. Simulated Annealing is an optimization algorithm based on thermodynamics that simulates the annealing process. Simulated Annealing successfully extended the SFN coverage area, as shown by the reduction in the number of receiver points with signal quality below the threshold.
Binocular adaptive optics visual simulator.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2009-09-01
A binocular adaptive optics visual simulator is presented. The instrument allows for measuring and manipulating the ocular aberrations of the two eyes simultaneously, while the subject performs visual testing under binocular vision. An important feature of the apparatus is the use of a single correcting device and wavefront sensor. Aberrations are controlled by means of a liquid-crystal-on-silicon spatial light modulator, onto which the two pupils of the subject are projected. Aberrations from the two eyes are measured with a single Hartmann-Shack sensor. As an example of the potential of the apparatus for the study of the impact of the eye's aberrations on binocular vision, results of contrast sensitivity after addition of spherical aberration are presented for one subject. Different binocular combinations of spherical aberration were explored. Results suggest complex binocular interactions in the presence of monochromatic aberrations. The technique and the instrument might contribute to a better understanding of binocular vision and to the search for optimized ophthalmic corrections.
Chang Li
2014-01-01
Much of the previous work in D-optimal design for regression models with correlated errors focused on polynomial models with a single predictor variable, in large part because of the intractability of an analytic solution. In this paper, we present a modified, improved simulated annealing algorithm, providing practical approaches to specifying the annealing cooling parameters, thresholds, and search neighborhoods for the perturbation scheme, which finds approximate D-optimal designs for 2-way and 3-way polynomial regression for a variety of specific correlation structures with a given correlation coefficient. Results in each correlated-errors case are compared with the traditional simulated annealing algorithm, that is, the SA algorithm without our improvement. Our improved simulated annealing results generally had higher D-efficiency than the traditional simulated annealing algorithm, especially when the correlation parameter was well away from 0.
Optimal Lead-lag Controller for Distributed Generation Unit in Island Mode Using Simulated Annealing
A. Akbarimajd
2014-07-01
The active and reactive power components of a Distributed Generation (DG) unit are normally controlled by a conventional dq-current control strategy. However, after islanding, the dq-current controller is not able to successfully complete the control task and is disabled, and a lead-lag control strategy optimized by simulated annealing is proposed for control of the DG unit in islanding mode. The Integral of Time multiplied by Absolute Error (ITAE) criterion is used as the cost function of the simulated annealing in order to achieve a smooth response and robust behavior. The proposed controller improves the robust stability margins of the system. Simulations with different load and input operating conditions verify the advantages of the proposed controller in comparison with a previously developed classic controller in terms of robustness and response time.
Sheng Lu
2015-01-01
To solve the problem of parameter selection during the design of a magnetically coupled resonant wireless power transmission system (MCR-WPT), this paper proposes an improved genetic simulated annealing algorithm. Firstly, the equivalent circuit of the system is analyzed and a nonlinear programming mathematical model is built. Secondly, in place of the penalty function method in the genetic algorithm, a selection strategy based on the distance between individuals is adopted, reducing the number of empirical parameters. Meanwhile, the convergence rate and searching ability are improved by calculating the crossover and mutation probabilities according to the variance of the population's fitness. Finally, a simulated annealing operator is added to increase the local search ability of the method. The simulation shows that the improved method can escape local optima and reach the global optimum faster. The optimized system can achieve the practical requirements.
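The variance-based adaptation of crossover and mutation probabilities described above might be realized along the following lines; the abstract does not give the exact formula, so this linear rule and its parameter ranges are labeled assumptions:

```python
def adaptive_rates(fitnesses, pc_range=(0.5, 0.9), pm_range=(0.01, 0.2)):
    """Illustrative rule (assumed, not the paper's exact formula): when the
    population fitness variance is low the search is stagnating, so raise
    the mutation probability and lower the crossover probability; when the
    variance is high, do the opposite. Rates are interpolated linearly."""
    mean = sum(fitnesses) / len(fitnesses)
    var = sum((x - mean) ** 2 for x in fitnesses) / len(fitnesses)
    s = var / (var + 1.0)           # squash variance into [0, 1)
    pc = pc_range[0] + (pc_range[1] - pc_range[0]) * s
    pm = pm_range[1] - (pm_range[1] - pm_range[0]) * s
    return pc, pm
```

A stagnant population (zero variance) then gets the maximum mutation rate and minimum crossover rate, while a diverse population gets the reverse.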
Ohzeki, Masayuki
2017-01-01
Quantum annealing is a generic solver of the optimization problem that uses fictitious quantum fluctuation. Its simulation in classical computing is often performed using the quantum Monte Carlo simulation via the Suzuki–Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians. PMID:28112244
Simulated annealing: an application in fine particle magnetism
Legeratos, A.; Chantrell, R.W.; Wohlfarth, E.P.
1985-07-01
Using a model of a system of interacting fine ferromagnetic particles, a computer simulation of the dynamical approach to local or global minima of the system is developed for two different schedules of the application of ac and dc magnetic fields. The process of optimization, i.e., the achievement of a global minimum, depends on the rate of reduction of the ac field and on the symmetry of the ac field cycles. The calculations carried out to illustrate these effects include remanence curves and the zero field remanence for both schedules under different conditions. The growth of the magnetization during these processes was studied, and the interaction energy was calculated to best illustrate the optimization.
The Adaptive Multi-scale Simulation Infrastructure
Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)
2015-09-01
The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimal intrusion. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular effort has been devoted to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into a multi-scale simulation via AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-01
Vehicular traffic congestion is a significant problem in many cities, due to the increasing number of vehicles driving on city roads of limited capacity. Congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoiding traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution (TOPSIS) and the Dijkstra algorithm. The weighted sum and TOPSIS methods are used to formulate the different attributes in the simulated annealing cost function. For the Sheffield scenario, simulation results show that the improved simulated annealing TOPSIS method improves traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO2 emissions compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario.
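The TOPSIS step used to combine the two route attributes can be sketched as follows, treating average speed as a benefit attribute and road length as a cost attribute. The equal weights and the tiny candidate set are illustrative assumptions, not the paper's configuration:

```python
import math

def topsis_rank(routes, weights=(0.5, 0.5)):
    """Rank candidate routes given as (avg_speed, length) tuples.
    avg_speed is a benefit attribute (higher is better); length is a
    cost attribute (lower is better). Returns route indices, best first."""
    n = len(routes)
    # vector-normalize each attribute column, then apply weights
    norms = [math.sqrt(sum(r[j] ** 2 for r in routes)) for j in range(2)]
    weighted = [[weights[j] * routes[i][j] / norms[j] for j in range(2)]
                for i in range(n)]
    # ideal point: max speed, min length; anti-ideal: the opposite
    ideal = [max(w[0] for w in weighted), min(w[1] for w in weighted)]
    worst = [min(w[0] for w in weighted), max(w[1] for w in weighted)]
    scores = []
    for w in weighted:
        d_pos = math.dist(w, ideal)   # distance to the ideal solution
        d_neg = math.dist(w, worst)   # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return sorted(range(n), key=lambda i: -scores[i])
```

In the paper's method this closeness score would feed the simulated annealing cost function rather than rank a fixed candidate list.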
Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej
2016-04-01
Due to their exceptional strength combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was to evaluate the predictive capabilities of such models as far as their applicability to the simulation of thermal cycles for AHSS is concerned. Two models were considered: the former an upgrade of the JMAK equation, the latter an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set included experimental simulations of the thermal cycle characteristic of continuous annealing lines, to evaluate the capability of the models to properly describe phase transformations in this process. The third set used data from an industrial continuous annealing line. Validation and verification confirmed the models' good predictive capabilities. Since it does not require application of the additivity rule, the upgraded Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
Corazza, S; Mündermann, L; Chaudhari, A M; Demattio, T; Cobelli, C; Andriacchi, T P
2006-06-01
Human motion capture is frequently used to study musculoskeletal biomechanics and clinical problems, as well as to provide realistic animation for the entertainment industry. The most popular technique for human motion capture uses markers placed on the skin, despite some important drawbacks, including the impediment to motion caused by the markers and relative movement between the skin where the markers are placed and the underlying bone. The latter makes it difficult to estimate the motion of the underlying bone, which is the variable of interest for biomechanical and clinical applications. A model-based markerless motion capture system is presented in this study, which does not require the placement of any markers on the subject's body. The described method is based on visual hull reconstruction and an a priori model of the subject. A custom version of adapted fast simulated annealing has been developed to match the model to the visual hull. The tracking capability and a quantitative validation of the method were evaluated in a virtual environment for a complete gait cycle. The obtained mean errors, over an entire gait cycle, for knee and hip flexion are respectively 1.5° (±3.9°) and 2.0° (±3.0°), while for knee and hip adduction they are respectively 2.0° (±2.3°) and 1.1° (±1.7°). Results for the ankle and shoulder joints are also presented. Experimental results captured in a gait laboratory with a real subject are also shown to demonstrate the effectiveness and potential of the presented method in a clinical environment.
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing.
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, "MOPSOSA". The proposed algorithm is capable of automatic clustering, partitioning a dataset into a suitable number of clusters. MOPSOSA combines the features of multi-objective particle swarm optimization (PSO) and Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices are optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset: the first is based on the Euclidean distance, the second on the point symmetry distance, and the third on the short distance. A number of algorithms were compared with MOPSOSA in resolving clustering problems by determining the actual number of clusters and the optimal clustering. Computational experiments were carried out on fourteen artificial and five real-life datasets.
Fast and accurate protein substructure searching with simulated annealing and GPUs
Stivala Alex D
2010-09-01
Background: Searching a database of protein structures for matches to a query structure, or for occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results: We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as, or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions: The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.
Global Optimization with Simulated Annealing
Francisco Sánchez Mares
2006-01-01
This paper presents an application of the global optimization method Simulated Annealing (SA). The technique has been applied in several areas of engineering as a robust and versatile strategy for successfully computing the global minimum of a function or a system of functions. To test the efficiency of the method, the global minima of an arbitrary function were found, and the numerical behavior of Simulated Annealing was evaluated during convergence to the two solutions exhibited by the case study.
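A minimal version of the kind of experiment described, SA locating the global minima of a function with two symmetric solutions (here x⁴ − x², whose two global minima sit at ±1/√2 with value −0.25), might look like the following sketch. All schedule parameters are illustrative, not taken from the paper:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, alpha=0.995, steps=4000, sigma=0.3, seed=0):
    """Minimal SA for a one-dimensional function: Gaussian proposals,
    Metropolis acceptance, geometric cooling. All parameters are
    illustrative choices."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, sigma)
        fc = f(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha
    return best_x, best_f

# A double-well with two global minima at x = +/- 1/sqrt(2), f = -0.25
best_x, best_f = simulated_annealing(lambda x: x ** 4 - x ** 2, x0=2.0)
```

Rerunning with different seeds shows the run settling into either of the two symmetric minima, mirroring the two-solution case study.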
Jin Shi-Feng; Wang Wei-Min; Zhou Jian-Kun; Guo Hong-Xuan; J.F. Webb; Bian Xiu-Fang
2005-01-01
The nanocrystallization behaviour of Zr70Cu20Ni10 metallic glass during isothermal annealing is studied by employing a Monte Carlo simulation incorporating a modified Ising model and a Q-state Potts model. Based on the simulated microstructure and differential scanning calorimetry curves, we find that the low crystal-amorphous interface energy of Ni plays an important role in the nanocrystallization of primary Zr2Ni. When T < TImax (where TImax is the temperature of maximum nucleation rate), an increase of temperature results in a larger growth rate and a much finer microstructure for the primary Zr2Ni, which accords with the microstructure evolution in "flash annealing". Finally, the Zr2Ni/Zr2Cu interface energy σG contributes to the pinning of the primary nano-sized Zr2Ni grains within the later-formed normal Zr2Cu grains.
Research on coal-mine gas monitoring system controlled by annealing simulating algorithm
Zhou, Mengran; Li, Zhenbi
2007-12-01
This paper introduces the principle and schematic diagram of a gas monitoring system based on the infrared method. A simulated annealing algorithm is adopted to find the global optimum solution, with the Metropolis criterion used in an iterative combinatorial optimization driven by a decreasing control parameter, aimed at large-scale combinatorial optimization problems. Experimental results obtained from the algorithm training scheme and workflow indicate that simulated annealing applied to gas identification outperforms the traditional linear local search method: the algorithm iterates to the optimum rapidly, the quality of the solution is improved, CPU time is shortened, and the gas identification rate is increased. For mines with a high risk of gas outburst, advance forecasting of regional danger and disaster can be realized, improving the reliability of coal-mine safety.
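The Metropolis criterion referred to above reduces to a simple acceptance rule, which is what lets the search escape local optima while the control parameter is still high:

```python
import math

def metropolis_accept_prob(delta, t):
    """Metropolis criterion: improvements (delta <= 0) are always accepted;
    a worsening move of size delta > 0 is accepted with probability
    exp(-delta / t), where t is the decreasing control parameter."""
    if delta <= 0:
        return 1.0
    return math.exp(-delta / t)
```

As t is lowered, the probability of accepting any given worsening move shrinks, so the search hardens from exploration into hill-climbing.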
吴剑锋; 朱学愚; 刘建立
1999-01-01
The genetic algorithm (GA) is a global, random search procedure based on the mechanics of natural selection and natural genetics. A new optimization method, the genetic algorithm-based simulated annealing penalty function (GASAPF), is presented to solve a groundwater management model. Compared with traditional gradient-based algorithms, the GA is straightforward and there is no need to calculate derivatives of the objective function. The GA is able to generate both convex and nonconvex points within the feasible region, and handling the constraints by a simulated annealing technique ensures that it converges to the global, or at least a near-global, optimal solution. Results for a maximum-pumping example show that the GASAPF is very efficient and robust for solving the optimization model.
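The abstract does not give the GASAPF weighting; one illustrative reading is a penalty whose weight grows as an SA-style control parameter is cooled over the generations, so infeasible individuals are tolerated early and squeezed out later:

```python
def penalty_weight(generation, t0=10.0, alpha=0.95):
    """Control parameter t0 * alpha**generation cools geometrically, so
    the penalty weight 1/t rises as the GA proceeds (assumed form,
    not the paper's exact schedule)."""
    return 1.0 / (t0 * alpha ** generation)

def sa_penalized_fitness(objective, violation, generation):
    """Penalized objective for a minimization GA: constraint violations
    are tolerated early on and punished increasingly hard later."""
    return objective + penalty_weight(generation) * violation
```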
Salcedo-Sanz, Sancho; Santiago-Mozos, Ricardo; Bousoño-Calzón, Carlos
2004-04-01
A hybrid Hopfield network-simulated annealing algorithm (HopSA) is presented for the frequency assignment problem (FAP) in satellite communications. The goal of this NP-complete problem is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignment so that the systems can accommodate increasing demands. The HopSA algorithm consists of a fast digital Hopfield neural network, which manages the problem constraints, hybridized with simulated annealing, which improves the quality of the solutions obtained. We analyze the problem and its formulation, describe and discuss the HopSA algorithm, and solve a set of benchmark problems. The results are compared with other existing approaches in order to show the performance of the HopSA approach.
Paul, Gerald
2010-01-01
For almost two decades the question of whether tabu search (TS) or simulated annealing (SA) performs better for the quadratic assignment problem has been unresolved. To answer this question satisfactorily, we compare performance at various values of targeted solution quality, running each heuristic at its optimal number of iterations for each target. We find that for a number of varied problem instances, SA performs better for higher quality targets while TS performs better for lower quality targets.
Kohei Arai
2012-07-01
A method for geophysical parameter estimation with microwave radiometer data based on Simulated Annealing (SA) is proposed. Geophysical parameters estimated with microwave radiometer data are closely related to each other, so simultaneous estimation imposes constraints in accordance with these relations. On the other hand, SA requires huge computational resources for convergence. In order to accelerate the convergence process, an oscillating decreasing function is proposed as the cooling function. Experimental results show remarkable improvements in the geophysical parameter estimations.
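The proposed oscillated decreasing cool-down function is not specified in the abstract; one plausible form, a geometric decay modulated by a bounded oscillation so the temperature periodically rises a little while trending downward, is:

```python
import math

def oscillating_cooling(k, t0=1.0, r=0.95, amp=0.5, omega=1.0):
    """Geometric cooling modulated by a bounded oscillation (amp < 1 keeps
    the control parameter positive). The temperature repeatedly increases
    locally while decreasing overall. Assumed illustrative form."""
    return t0 * (r ** k) * (1.0 + amp * math.cos(omega * k))
```

The brief reheating phases give the chain extra chances to escape local minima without abandoning the overall cooling trend.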
Paul, Gerald
2011-01-01
The quadratic assignment problem (QAP) is one of the most difficult combinatorial optimization problems. One of the most powerful and commonly used heuristics for obtaining approximations to the optimal solution of the QAP is simulated annealing (SA). We present an efficient implementation of the SA heuristic which performs more than 100 times faster than existing implementations for large problem sizes and large numbers of SA iterations.
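A standard ingredient of fast SA implementations for the QAP is evaluating a pairwise-swap move in O(n) with a delta formula instead of recomputing the O(n²) objective. The abstract does not detail this paper's specific optimizations; the sketch below shows the generic delta-evaluation trick for possibly asymmetric flow and distance matrices:

```python
def qap_cost(A, B, p):
    """Full QAP objective: sum over r, s of A[r][s] * B[p[r]][p[s]]."""
    n = len(p)
    return sum(A[r][s] * B[p[r]][p[s]] for r in range(n) for s in range(n))

def swap_delta(A, B, p, i, j):
    """Change in qap_cost from swapping p[i] and p[j], computed in O(n)
    rather than re-evaluating the O(n^2) objective."""
    pi, pj = p[i], p[j]
    # terms involving only positions i and j (including diagonals)
    d = (A[i][j] - A[j][i]) * (B[pj][pi] - B[pi][pj]) \
        + (A[i][i] - A[j][j]) * (B[pj][pj] - B[pi][pi])
    # cross terms with every other position k
    for k in range(len(p)):
        if k == i or k == j:
            continue
        pk = p[k]
        d += (A[i][k] - A[j][k]) * (B[pj][pk] - B[pi][pk]) \
            + (A[k][i] - A[k][j]) * (B[pk][pj] - B[pk][pi])
    return d
```

An SA loop then accepts or rejects the swap based on `swap_delta` alone, touching the full cost only when it wants an absolute value.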
A GPU implementation of the Simulated Annealing Heuristic for the Quadratic Assignment Problem
Paul, Gerald
2012-01-01
The quadratic assignment problem (QAP) is one of the most difficult combinatorial optimization problems. An effective heuristic for obtaining approximate solutions to the QAP is simulated annealing (SA). Here we describe an SA implementation for the QAP which runs on a graphics processing unit (GPU). GPUs are composed of low cost commodity graphics chips which in combination provide a powerful platform for general purpose parallel computing. For SA runs with large numbers of iterations, we fi...
Akbar, Akhmad Fanani; Nugraha, Andri Dian; Sule, Rachmat; Juanda, Aditya Abdurrahman
2013-09-01
Hypocenter determination of micro-earthquakes in the Mount "X-1" geothermal field has been conducted using simulated annealing and a guided error search method with a 1D seismic velocity model. In order to speed up the hypocenter determination, a three-circle intersection method was used to guide the simulated annealing and guided error search. We used P and S arrival times from microseismic data. In the simulated annealing and guided error search, the minimum travel time from a source to a receiver was calculated by ray tracing with the shooting method. The resulting hypocenters occurred at depths of 3-4 km below mean sea level. These hypocenter distributions correlate with a previous study, which concluded that the most active microseismic area is the site of many fractures and of vertical circulation. The resulting hypocenter locations were then used as input to determine a 1-D seismic velocity model using the joint hypocenter determination method. The VELEST results indicate a low Vp/Vs ratio at depths of 3-4 km; our interpretation is that this anomaly may be related to a rock layer saturated by vapor (gas or steam). Another feature is a high Vp/Vs ratio at depths of 1-3 km, which may be related to a rock layer saturated by fluid or to partial melting. We also analyzed the focal mechanisms of the microseismicity using the ISOLA method to determine the source characteristics of these events.
Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.
2016-10-01
Using a 4D grid of ~2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384,662 objects, we show our approach samples ~40 times more efficiently compared to a 'brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
Sousa, Tiago M; Morais, Hugo; Castro, R.
2014-01-01
An intensive use of dispersed energy resources is expected for future power systems, including distributed generation, especially based on renewable sources, and electric vehicles. The system operation methods and tools must be adapted to the increased complexity, especially for the optimal resource scheduling problem. Therefore, the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Michael B. Giles, Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Dzougoutov et al., Lect. Notes Comput. Sci. Eng. 44:59-88, Springer, 2005; Moon et al., Stoch. Anal. Appl. 23(3):511-558, 2005; Moon et al., Contemp. Math. 383:325-343, Amer. Math. Soc., 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Szepessy et al., Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^{-3}) for a single-level version of the adaptive algorithm down to O((TOL^{-1} log(TOL))^2).
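The multilevel idea can be illustrated on a toy problem. The sketch below estimates E[X_T] for geometric Brownian motion with a forward-Euler multilevel estimator on uniform grids (i.e., the non-adaptive baseline that the paper generalizes); the telescoping sum adds coarse-level estimates to coupled fine-minus-coarse corrections. All parameters and per-level sample counts are illustrative, not the optimized allocations of the method:

```python
import math
import random

def mlmc_gbm_mean(x0=1.0, mu=0.05, sigma=0.2, T=1.0, L=4, n0=20000, seed=0):
    """Multilevel forward-Euler Monte Carlo estimate of E[X_T] for
    dX = mu*X dt + sigma*X dW. Level l uses 2**l uniform Euler steps;
    correction levels couple fine and coarse paths through the same
    Brownian increments. Sample counts are fixed per level for
    simplicity rather than chosen adaptively."""
    rng = random.Random(seed)

    def euler_pair(l):
        nf = 2 ** l
        hf = T / nf
        xf = xc = x0
        inc = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            xf += mu * xf * hf + sigma * xf * dw
            inc += dw
            if l > 0 and step % 2 == 1:  # one coarse step per two fine steps
                xc += mu * xc * (2.0 * hf) + sigma * xc * inc
                inc = 0.0
        return xf, xc

    est = 0.0
    for l in range(L + 1):
        n = max(n0 // 4 ** l, 100)   # geometrically fewer samples per level
        s = 0.0
        for _ in range(n):
            xf, xc = euler_pair(l)
            s += xf - (xc if l > 0 else 0.0)
        est += s / n
    return est
```

Because the coupled corrections have small variance, most samples are spent on the cheap coarse levels; the exact mean here is x0·exp(mu·T).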
Erler, Axel; Wegmann, Susanne; Elie-Caille, Celine; Bradshaw, Charles Richard; Maresca, Marcello; Seidel, Ralf; Habermann, Bianca; Muller, Daniel J; Stewart, A Francis
2009-08-21
Single-strand annealing proteins, such as Redbeta from lambda phage or eukaryotic Rad52, play roles in homologous recombination. Here, we use atomic force microscopy to examine Redbeta quaternary structure and Redbeta-DNA complexes. In the absence of DNA, Redbeta forms a shallow right-handed helix. The presence of single-stranded DNA (ssDNA) disrupts this structure. Upon addition of a second complementary ssDNA, annealing generates a left-handed helix that incorporates 14 Redbeta monomers per helical turn, with each Redbeta monomer annealing approximately 11 bp of DNA. The smallest stable annealing intermediate requires 20 bp DNA and two Redbeta monomers. Hence, we propose that Redbeta promotes base pairing by first increasing the number of transient interactions between ssDNAs. Then, annealing is promoted by the binding of a second Redbeta monomer, which nucleates the formation of a stable annealing intermediate. Using threading, we identify sequence similarities between the RecT/Redbeta and the Rad52 families, which strengthens previous suggestions, based on similarities of their quaternary structures, that they share a common mode of action. Hence, our findings have implications for a common mechanism of DNA annealing mediated by single-strand annealing proteins including Rad52.
Thin film design using simulated annealing and study of the filter robustness
Boudet, Thierry; Chaton, Patrick
1996-08-01
Modern optical components require sophisticated coatings with tough specifications, and the design of optical multilayers has become a key activity of many laboratories and factories. A synthesis technique based on the simulated annealing algorithm is presented here. In this stochastic minimization no starting solution is required; only the materials and technological constraints need to be specified, and the algorithm always reaches a final result. As simulated annealing is a stochastic algorithm, a large number of state transitions is needed to reach a global minimum of the merit function used to evaluate the difference between the optical target and the calculated filter; nevertheless, the computing time remains reasonable on a workstation. A few examples show the performance of our program. Notably, no refinement is needed at the end of the annealing because the solution is already highly optimized. The design of robust filters with low sensitivity to technological variations remains a key factor for manufacturers. This is why we have established criteria that quantify the robustness of stacks; these also enable comparison of multilayers synthesized by different methods for the same target.
P. Wang
2015-04-01
In this paper, we introduce a novel image reconstruction algorithm named SAP, based on Least Squares Support Vector Machines (LS-SVM) and Simulated Annealing Particle Swarm Optimization (APSO). The algorithm introduces simulated annealing ideas into Particle Swarm Optimization (PSO), adopting a cooling-process function in place of the usual inertia weight function, so that the time-variant inertia weight features an annealing mechanism. The APSO procedure is employed to search for the optimized solution in Electrical Capacitance Tomography (ECT) image reconstruction. In order to overcome the soft-field characteristics of the ECT sensitivity field, image samples with typical flow patterns are chosen for training with LS-SVM. During training, the capacitance error caused by the soft-field characteristics is predicted and then used to construct the fitness function of the particle swarm optimization. Experimental results demonstrate that the proposed SAP algorithm has a fast convergence rate and outperforms the classic Landweber and Newton-Raphson algorithms in image reconstruction.
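The annealing-style inertia weight can be sketched as follows. The abstract does not give the exact cooling-process function, so the geometric decay, acceleration coefficients, and velocity clamp below are illustrative assumptions:

```python
import random

def annealed_pso(f, dim=2, n_particles=20, iters=100, seed=0,
                 w_start=0.9, w_end=0.4, alpha=0.96, c1=1.5, c2=1.5, vmax=5.0):
    """PSO minimizing f, with the inertia weight decayed by a geometric
    'cooling' factor instead of the usual linear ramp."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        w = w_end + (w_start - w_end) * alpha ** t   # annealed inertia weight
        for i in range(n_particles):
            for d in range(dim):
                v = (w * vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

High inertia early in the run favours exploration; as the weight "cools", the swarm contracts around the best solutions found, in the ECT setting driven by the LS-SVM-based fitness rather than the toy objective used here.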
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function; its main contribution is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination as well as some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm.
Nandipati, Giridhar, E-mail: giridhar.nandipati@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA (United States); Setyawan, Wahyu; Heinisch, Howard L. [Pacific Northwest National Laboratory, Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Laboratory, Richland, WA (United States); Department of Physics, University of Washington, Seattle, WA 98195 (United States); Kurtz, Richard J. [Pacific Northwest National Laboratory, Richland, WA (United States); Wirth, Brian D. [University of Tennessee, Knoxville, TN (United States)
2015-07-15
The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.
Adaptive resolution simulation of oligonucleotides
Netz, Paulo A.; Potestio, Raffaello; Kremer, Kurt
2016-12-01
Nucleic acids are characterized by a complex hierarchical structure and a variety of interaction mechanisms with other molecules. These features suggest the need for multiscale simulation methods to grasp the relevant physical properties of deoxyribonucleic acid (DNA) and RNA using in silico experiments. Here we report an implementation of dual-resolution modeling of a DNA oligonucleotide in physiological conditions; in the presented setup, only the nucleotide molecule and the solvent and ions in its proximity are described at the atomistic level, while the water molecules and ions far from the DNA are represented as computationally less expensive coarse-grained particles. Through the analysis of several structural and dynamical parameters, we show that this setup reliably reproduces the physical properties of the DNA molecule as observed in reference atomistic simulations. These results represent a first step towards a realistic multiscale modeling of nucleic acids and provide a quantitatively solid ground for their simulation using dual-resolution methods.
Kumar, Pushpendra; Huber, Patrick
2016-04-01
The discovery of porous silicon formation in 1956, while electro-polishing crystalline Si in hydrofluoric acid (HF), triggered large-scale investigations of porous silicon formation and of the changes in its physical and chemical properties with thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200 °C to 800 °C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption/desorption isotherms show hysteresis typical for capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind hysteresis formation. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different adsorption and desorption behavior of nitrogen in PS with pore diameter is discussed in terms of concave menisci formation inside the pore space, which was shown to be related to the induced pressure as the pore diameter varies from 7.2 nm to 3.4 nm.
Simulated annealing algorithm for solving chambering student-case assignment problem
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that arises in many settings. The challenge of solving it grows whenever the complexity of preferences, the existence of real-world constraints, or the problem size increases. This study focuses on solving a chambering student-case assignment problem, classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must read in chambers before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution; the search then proceeds with a simulated annealing algorithm for further improvement of solution quality. Analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. This research thus demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
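The two-stage scheme described above, a minimum-cost greedy construction followed by SA improvement, can be sketched as follows. The makespan-style objective, the cost matrix, and all parameters are toy assumptions, since the paper's actual data and constraints are not given:

```python
import math
import random

def greedy_assign(cost):
    """Minimum-cost greedy: give each case to the student who solves it fastest."""
    return [min(range(len(cost)), key=lambda s: cost[s][c]) for c in range(len(cost[0]))]

def makespan(cost, assign):
    """Completion time of the busiest student under a case-to-student assignment."""
    loads = [0.0] * len(cost)
    for c, s in enumerate(assign):
        loads[s] += cost[s][c]
    return max(loads)

def anneal(cost, assign, t0=10.0, cooling=0.99, steps=5000, seed=1):
    """Improve a feasible assignment by SA with single-case reassignment moves."""
    rng = random.Random(seed)
    cur, e = assign[:], makespan(cost, assign)
    best, best_e = cur[:], e
    t = t0
    for _ in range(steps):
        c = rng.randrange(len(cur))
        old, cur[c] = cur[c], rng.randrange(len(cost))
        e_new = makespan(cost, cur)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = cur[:], e
        else:
            cur[c] = old          # reject: undo the reassignment
        t *= cooling
    return best, best_e

rng = random.Random(7)
cost = [[rng.uniform(1, 10) for _ in range(12)] for _ in range(4)]  # 4 students, 12 cases
g = greedy_assign(cost)
sol, span = anneal(cost, g)
```

Because the SA phase starts from the greedy solution and only records improvements, the returned makespan can never be worse than the greedy one.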
Comparing of the Deterministic Simulated Annealing Methods for Quadratic Assignment Problem
Mehmet Güray ÜNSAL
2013-08-01
In this study, threshold accepting and record-to-record travel, two deterministic variants of the meta-heuristic simulated annealing method, are applied to the quadratic assignment problem and statistically analyzed for significant differences in their objective function values and CPU times. No significant differences are found between the two algorithms in terms of either CPU time or objective function value. Consequently, on the quadratic assignment problem instances studied, the two algorithms compared here show the same performance with respect to CPU time and objective function value.
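The two deterministic acceptance rules under comparison differ in a single line; a minimal sketch (function names and the toy test function are assumptions):

```python
import random

def accept_threshold(e_new, e_cur, e_best, threshold):
    # Threshold accepting: take any move that worsens the CURRENT
    # value by less than a fixed threshold.
    return e_new < e_cur + threshold

def accept_rrt(e_new, e_cur, e_best, deviation):
    # Record-to-record travel: take any move that stays within a fixed
    # deviation of the BEST (record) value found so far.
    return e_new < e_best + deviation

def search(accept, energy, neighbor, x0, param, steps=3000, seed=0):
    """Generic descent loop driven by a pluggable deterministic acceptance rule."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        if accept(ey, e, best_e, param):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

f = lambda x: (x - 3.0) ** 2                      # toy objective
step = lambda x, rng: x + rng.uniform(-0.1, 0.1)  # local move
x1, e1 = search(accept_threshold, f, step, 0.0, 0.05)
x2, e2 = search(accept_rrt, f, step, 0.0, 0.05)
```

Both rules are deterministic in the sense that acceptance involves no random coin flip, unlike the Metropolis criterion of classical SA.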
Design of phase plates for shaping partially coherent beams by simulated annealing
Li Jian-Long; Lü Bai-Da
2008-01-01
Taking the Gaussian Schell-model beam as a typical example of partially coherent beams, this paper applies the simulated annealing (SA) algorithm to the design of phase plates for shaping partially coherent beams. A flow diagram is presented to illustrate the procedure of phase optimization by the SA algorithm. Numerical examples demonstrate the advantages of the SA algorithm in shaping partially coherent beams. A uniform flat-topped beam profile with maximum reconstruction error RE < 1.74% is achieved. A further extension of the approach is discussed.
Total lineshape analysis of high-resolution NMR spectra powered by simulated annealing
Cheshkov, D. A.; Sinitsyn, D. O.; Sheberstov, K. F.; Chertkov, V. A.
2016-11-01
A novel algorithm for total lineshape analysis of high-resolution NMR spectra has been developed. Global optimization by simulated annealing overcomes the main shortcoming of common approaches, which frequently return solutions at local rather than global minima. The algorithm has been verified on four-spin ABCD test systems and successfully used to analyze experimental NMR spectra of proline. The approach avoids a sophisticated manual setup of initial parameters and allows complicated high-resolution NMR spectra to be analyzed nearly automatically.
Zhao Zhi-Jin; Zheng Shi-Lian; Xu Chun-Yun; Kong Xian-Zheng
2007-01-01
Hidden Markov models (HMMs) have been used to model burst error sources of wireless channels. This paper proposes a hybrid method using a genetic algorithm (GA) and simulated annealing (SA) to train HMMs for discrete channel modelling. The proposed method is compared with a pure GA; experimental results show that the HMMs trained by the hybrid method better describe the error sequences, owing to SA's ability to facilitate hill-climbing at the later stages of the search. The burst error statistics of the HMMs trained by the proposed method and the corresponding error sequences are also presented to validate the method.
Simulated annealing applied to two-dimensional low-beta reduced magnetohydrodynamics
Chikasue, Y., E-mail: chikasue@ppl.k.u-tokyo.ac.jp [Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561 (Japan); Furukawa, M., E-mail: furukawa@damp.tottori-u.ac.jp [Graduate School of Engineering, Tottori University, Minami 4-101, Koyama-cho, Tottori-shi, Tottori 680-8552 (Japan)
2015-02-15
The simulated annealing (SA) method is applied to two-dimensional (2D) low-beta reduced magnetohydrodynamics (R-MHD). We have successfully obtained stationary states of the system numerically by the SA method with Casimir invariants preserved. Since the 2D low-beta R-MHD has two fields, the relaxation process becomes complex compared to a single field system such as 2D Euler flow. The obtained stationary state can have fine structure. We have found that the fine structure appears because the relaxation processes are different between kinetic energy and magnetic energy.
Fleischer, M.; Jacobson, S.
1994-12-31
This paper presents a new empirical approach designed to illustrate the theory developed in Fleischer and Jacobson regarding entropy measures and the finite-time performance of the simulated annealing (SA) algorithm. The theory is tested using several experimental methodologies based on a new structure, generic configuration spaces, and polynomial transformations between NP-hard problems. Both approaches provide several ways to alter the configuration space and its associated entropy measure while preserving the value of the globally optimal solution. This makes it possible to illuminate the extent to which entropy measures impact the finite-time performance of the SA algorithm.
Hansen, S H
2004-01-01
We present a user-friendly tool for the analysis of data from Sunyaev-Zeldovich effect observations. The tool is based on the stochastic method of simulated annealing, and allows the extraction of the central values and error bars of the three SZ parameters: the Comptonization parameter y, the peculiar velocity v_p, and the electron temperature T_e. The f77 code SASZ allows any number of observing frequencies and spectral band shapes. As an example we consider the SZ parameters for the Coma cluster.
Kerr, I. D.; Sankararamakrishnan, R; Smart, O.S.; Sansom, M S
1994-01-01
A parallel bundle of transmembrane (TM) alpha-helices surrounding a central pore is present in several classes of ion channel, including the nicotinic acetylcholine receptor (nAChR). We have modeled bundles of hydrophobic and of amphipathic helices using simulated annealing via restrained molecular dynamics. Bundles of Ala20 helices, with N = 4, 5, or 6 helices/bundle were generated. For all three N values the helices formed left-handed coiled coils, with pitches ranging from 160 A (N = 4) to...
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, i.e., defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this area; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although the kriging variance has many advantages for defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in boundary uncertainty assessment, the application of the combined variance is investigated for defining the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of the results from the proposed objective function and conventional methods indicates that the new objective function makes the algorithm output sensitive to variations of grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
Redesigning rain gauges network in Johor using geostatistics and simulated annealing
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)
2015-02-03
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the level of accuracy required by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used a combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum estimated variance, demonstrating that the combination of the variance-reduction method and simulated annealing succeeds in developing a new optimal rain gauge network.
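The idea of pairing a variance-type objective with SA-driven station relocation can be sketched as follows. The coverage objective below is a crude stand-in for the paper's kriging variance-reduction criterion, and the candidate sites are synthetic:

```python
import math
import random

def coverage_var(stations, points):
    """Proxy objective (assumed): mean squared distance from each grid point
    to its nearest gauge, a crude stand-in for kriging estimation variance."""
    tot = 0.0
    for p in points:
        tot += min((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 for s in stations)
    return tot / len(points)

def anneal_network(candidates, points, k, t0=1.0, cooling=0.995, steps=4000, seed=9):
    """Select k gauge sites from the candidates by SA with swap-in/swap-out moves."""
    rng = random.Random(seed)
    chosen = rng.sample(candidates, k)
    rest = [c for c in candidates if c not in chosen]
    e = coverage_var(chosen, points)
    best, best_e = chosen[:], e
    t = t0
    for _ in range(steps):
        i, j = rng.randrange(k), rng.randrange(len(rest))
        chosen[i], rest[j] = rest[j], chosen[i]   # swap one gauge in/out
        en = coverage_var(chosen, points)
        if en <= e or rng.random() < math.exp((e - en) / t):
            e = en
            if e < best_e:
                best, best_e = chosen[:], e
        else:
            chosen[i], rest[j] = rest[j], chosen[i]  # reject: undo the swap
        t *= cooling
    return best, best_e

rng = random.Random(0)
cands = [(rng.random(), rng.random()) for _ in range(40)]
grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]
net, v = anneal_network(cands, grid, k=8)
```

A real redesign would replace `coverage_var` with the kriging estimation variance computed from the fitted variogram.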
Design and optimization of solid rocket motor Finocyl grain using simulated annealing
Ali Kamran; LIANG Guo-zhu
2011-01-01
This research effort outlines the application of a computer-aided design (CAD)-centric technique to the design and optimization of a solid rocket motor Finocyl (fin in cylinder) grain using simulated annealing. The proper method for constructing the grain configuration model, the ballistic performance and the optimizer integration for analysis is presented. Finocyl is a complex grain configuration, requiring thirteen variables to define the geometry. The large number of variables complicates not only the geometrical construction but also the optimization process. The CAD representation encapsulates all of the geometric entities pertinent to the grain design in a parametric way, allowing manipulation of the grain entity (web), performing regression and automating geometrical data calculations. Robustness in avoiding local minima and an efficient capacity to explore the design space make simulated annealing an attractive choice as optimizer. This is demonstrated with a constrained optimization of Finocyl grain geometry, for a homogeneous, isotropic propellant, uniform regression, and a quasi-steady, bulk-mode internal ballistics model, that maximizes average thrust for required deviations from neutrality.
Temporary Workforce Planning with Firm Contracts: A Model and a Simulated Annealing Heuristic
Muhammad Al-Salamah
2011-01-01
The aim of this paper is to introduce a model for temporary staffing when temporary employment is managed by firm contracts, and to propose a simulated annealing-based method to solve the model. Temporary employment is a policy frequently used to adjust working-hour capacity to fluctuating demand. Temporary workforce planning models have often been unnecessarily simplified to account only for periodic hiring and laying off: a company reviews its workforce requirement every period and makes hire-fire decisions accordingly, usually with a layoff cost. We present a more realistic temporary workforce planning model that assumes a firm contract between the worker and the company, which can extend over several periods. The model assumes the traditional constraints, such as inventory balance constraints, worker availability, and labor hour mix. The costs are the inventory holding cost, the training cost of the temporary workers, and the backorder cost. The mixed integer model developed for this case has been found difficult to solve even for small problem sizes; therefore, a simulated annealing algorithm is proposed to solve it. The performance of the SA algorithm is compared with the CPLEX solution.
Kai Moriguchi
2015-01-01
We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm's versatility. The thinning rate, constrained to 0-50% every 5 years at stand ages of 10-45 years, was optimized to maximize the net present value over one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method's reliability, and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models produced a better solution than coarse full enumeration. However, for two yield models the variation in the objective function obtained with simulated annealing was significantly larger than that of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search, but multiple runs are necessary to obtain reliable solutions.
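An SA search over a discrete thinning schedule of this kind can be sketched as below. The exponential yield model, price, and discount rate are invented placeholders for the paper's four benchmark yield models:

```python
import math
import random

AGES = list(range(10, 50, 5))            # thinning decisions at stand ages 10-45, every 5 years
RATE, DISCOUNT, PRICE = 0.08, 0.03, 1.0  # toy growth/discount/price parameters (assumed)

def npv(rates):
    """Toy yield model: exponential stand growth, revenue from each thinning
    plus a final clearcut at age 50, all discounted to the present."""
    vol, value, age = 100.0, 0.0, 0
    for a, r in zip(AGES, rates):
        vol *= math.exp(RATE * (a - age))
        age = a
        harvest = vol * r
        vol -= harvest
        value += PRICE * harvest / (1 + DISCOUNT) ** a
    vol *= math.exp(RATE * (50 - age))
    return value + PRICE * vol / (1 + DISCOUNT) ** 50

def anneal_thinning(steps=4000, t0=5.0, cooling=0.998, seed=3):
    """Maximize NPV over thinning rates clamped to the feasible range 0-50%."""
    rng = random.Random(seed)
    x = [0.25] * len(AGES)
    e = npv(x)
    best, best_e = x[:], e
    t = t0
    for _ in range(steps):
        y = x[:]
        i = rng.randrange(len(y))
        y[i] = min(0.5, max(0.0, y[i] + rng.uniform(-0.1, 0.1)))
        ey = npv(y)
        if ey >= e or rng.random() < math.exp((ey - e) / t):  # maximizing NPV
            x, e = y, ey
            if e > best_e:
                best, best_e = x[:], e
        t *= cooling
    return best, best_e

plan, value = anneal_thinning()
```

With 8 decisions at 11 discrete levels the exact search space is about 11^8 schedules, which is why the paper resorts to coarse enumeration and metaheuristics.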
A Simulated Annealing Based Location Area Optimization in Next Generation Mobile Networks
Vilmos Simon
2007-01-01
Mobile networks have faced a rapid increase in the number of mobile users, and the solution for supporting the growing population is to reduce cell sizes and increase bandwidth reuse. This causes the number of location management operations and call deliveries to increase significantly, resulting in high signaling overhead. We focus on minimizing this overhead through efficient Location Area Planning (LAP). In this paper we seek to determine the location areas that minimize the registration cost, constrained by the paging cost. For that we propose a simulated annealing algorithm, applied to a basic location area partition of cells formed by a greedy algorithm. We used our realistic mobile environment simulator to generate input (cell-changing and incoming-call statistics) for the algorithm, and comparison of the registration cost function values shows that a significant reduction in signaling traffic was achieved.
Research on Optimal Control for the Vehicle Suspension Based on the Simulated Annealing Algorithm
Jie Meng
2014-01-01
A method is designed to optimize the weight matrices of the LQR controller using the simulated annealing algorithm. The method exploits the random searching characteristics of the algorithm to optimize the weight matrices with suspension performance indexes as the target function. It improves the design efficiency and control performance of LQR control and addresses the difficulty of defining the weight matrices for the LQR controller. A simulation is provided for vehicle active chassis control. The results show that the active suspension with the LQR optimized by the simulated annealing algorithm outperforms both the chassis controlled by the normal LQR and the passive one. Meanwhile, the problem of defining the weight matrices is largely solved.
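The weight-tuning idea can be illustrated with a library-free stand-in: rather than optimizing the LQR weight matrices themselves (which would need a Riccati solver), this sketch anneals state-feedback gains directly against a simulated quadratic cost. The toy double-integrator plant and all parameters are assumptions, not the paper's suspension model:

```python
import math
import random

# Toy double-integrator plant (assumed), state = [position, velocity], dt = 0.1.
A = [[1.0, 0.1], [0.0, 1.0]]
B = [0.005, 0.1]

def cost(K, steps=200):
    """Closed-loop quadratic performance index J = sum(x'x + u^2) for u = -Kx."""
    x = [1.0, 0.0]
    J = 0.0
    for _ in range(steps):
        u = -(K[0] * x[0] + K[1] * x[1])
        J += x[0] ** 2 + x[1] ** 2 + u ** 2
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
        if J > 1e6:          # destabilizing gains: prune the rollout early
            return 1e6
    return J

def anneal_gains(t0=50.0, cooling=0.995, steps=3000, seed=4):
    """Random search over the two feedback gains with Metropolis acceptance."""
    rng = random.Random(seed)
    K = [1.0, 1.0]
    e = cost(K)
    best, best_e = K[:], e
    t = t0
    for _ in range(steps):
        Kn = [k + rng.uniform(-0.2, 0.2) for k in K]
        en = cost(Kn)
        if en <= e or rng.random() < math.exp((e - en) / t):
            K, e = Kn, en
            if e < best_e:
                best, best_e = K[:], e
        t *= cooling
    return best, best_e

K, J = anneal_gains()
```

In the paper's setting the annealed variables would instead be the diagonal entries of the Q and R weight matrices, with the LQR gain recomputed at each step.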
Kinetic Monte Carlo simulations of boron activation in implanted Si under laser thermal annealing
Fisicaro, Giuseppe; Pelaz, Lourdes; Aboy, Maria; Lopez, Pedro; Italia, Markus; Huet, Karim; Cristiano, Filadelfo; Essa, Zahi; Yang, Qui; Bedel-Pereira, Elena; Quillec, Maurice; La Magna, Antonino
2014-02-01
We investigate the correlation between dopant activation and damage evolution in boron-implanted silicon under excimer laser irradiation. The dopant activation efficiency in the solid phase was measured under a wide range of irradiation conditions and simulated using coupled phase-field and kinetic Monte Carlo models. With the inclusion of dopant atoms, the presented code extends the capabilities of a previous version, allowing its definitive validation by means of detailed comparisons with experimental data. The stochastic method predicts the post-implant kinetics of the defect-dopant system in the far-from-equilibrium conditions caused by laser irradiation. The simulations explain the dopant activation dynamics and demonstrate that the competitive dopant-defect kinetics during the first laser annealing treatment dominates the activation phenomenon, stabilizing the system against additional laser irradiation steps.
Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu
2001-08-01
This work describes an alternative simulated annealing algorithm applied to the design of the main magnet for a magnetic resonance imaging machine. The algorithm uses a probabilistic radial basis neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required to reach the global maximum, compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
Adaptive Optics Simulations for Siding Spring
Goodwin, Michael; Lambert, Andrew
2012-01-01
Using an observationally derived model optical turbulence profile (model-OTP), we have investigated the performance of adaptive optics (AO) at Siding Spring Observatory (SSO), Australia. The simulations cover the performance of the AO techniques of single-conjugate adaptive optics (SCAO), multi-conjugate adaptive optics (MCAO) and ground-layer adaptive optics (GLAO). The simulation results presented in this paper predict the performance of these AO techniques as applied to the Australian National University (ANU) 2.3 m and Anglo-Australian Telescope (AAT) 3.9 m telescopes for the astronomical wavelength bands J, H and K. The results indicate that AO performance is best at the longer wavelengths (K-band) and in the best seeing conditions (sub 1-arcsecond). The most promising results are found for the GLAO simulations (field of view of 180 arcsecs), with the field RMS for encircled energy 50% diameter (EE50d) being uniform and minimally affected by the free-atmosphere turbulence. The GLAO performance is reasonably good over...
Minimizing distortion and internal forces in truss structures by simulated annealing
Kincaid, Rex K.
1989-01-01
Inaccuracies in the lengths of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or the rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem: in particular, when only the member length errors are considered, minimizing d^2_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature; hence minimizing d^2_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d^2_rms. The plausibility of this technique rests on its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and
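Since the member-placement problem reduces to a quadratic assignment problem, the SA-with-two-interchange approach can be sketched on a generic QAP; the flow and distance matrices below are synthetic stand-ins for the truss error-influence data:

```python
import math
import random

def qap_cost(flow, dist, perm):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]] over all pairs."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def anneal_qap(flow, dist, t0=50.0, cooling=0.995, steps=6000, seed=5):
    """SA over permutations using the two-interchange (pairwise swap) move."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    e = qap_cost(flow, dist, perm)
    best, best_e = perm[:], e
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)    # two-interchange move
        perm[i], perm[j] = perm[j], perm[i]
        e_new = qap_cost(flow, dist, perm)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = perm[:], e
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: swap back
        t *= cooling
    return best, best_e

rng = random.Random(11)
n = 8
flow = [[rng.randint(0, 9) for _ in range(n)] for _ in range(n)]
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
perm, qcost = anneal_qap(flow, dist)
```

Recomputing the full cost after every swap, as here, is the simple O(n^2) version; production QAP codes update the cost incrementally in O(n) per swap.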
Retrieval of Surface and Subsurface Moisture of Bare Soil Using Simulated Annealing
Tabatabaeenejad, A.; Moghaddam, M.
2009-12-01
Soil moisture is of fundamental importance to many hydrological and biological processes. Soil moisture information is vital to understanding the cycling of water, energy, and carbon in the Earth system. Knowledge of soil moisture is critical to agencies concerned with weather and climate, runoff potential and flood control, soil erosion, reservoir management, water quality, agricultural productivity, drought monitoring, and human health. The need to monitor soil moisture on a global scale has motivated missions such as Soil Moisture Active and Passive (SMAP) [1]. Rough surface scattering models and remote sensing retrieval algorithms are essential in the study of soil moisture, because soil can be represented as a rough surface structure. Effects of soil moisture on the backscattered field have been studied since the 1960s, but soil moisture estimation remains a challenging problem, and there is still a need for more accurate and more efficient inversion algorithms. It has been shown that the simulated annealing method is a powerful tool for inversion of the model parameters of rough surface structures [2]. The sensitivity of this method to measurement noise has also been investigated, assuming a two-layer structure characterized by the layers' dielectric constants, layer thickness, and statistical properties of the rough interfaces [2]. However, since the moisture profile varies with depth, it is sometimes necessary to model the rough surface as a layered structure with a rough interface on top and a stratified structure below, where each layer is assumed to have a constant volumetric moisture content. In this work, we discretize the soil structure into several layers of constant moisture content to examine the effect of the subsurface profile on the backscattering coefficient. We will show that while the moisture profile can vary in deeper layers, these layers do not affect the scattered electromagnetic field significantly. Therefore, we can use just a few layers.
Evaluating strong measurement noise in data series with simulated annealing method
Carvalho, J; Haase, M; Lind, P G
2013-01-01
Many stochastic time series can be described by a Langevin equation composed of a deterministic and a stochastic dynamical part. Such a stochastic process can be reconstructed by means of a recently introduced nonparametric method, thus increasing the predictability, i.e. knowledge of the macroscopic drift and the microscopic diffusion functions. If the measurement of a stochastic process is affected by additional strong measurement noise, the reconstruction process cannot be applied. Here, we present a method for the reconstruction of stochastic processes in the presence of strong measurement noise, based on a suitably parametrized ansatz. At the core of the process is the minimization of the functional distance between terms containing the conditional moments taken from measurement data, and the corresponding ansatz functions. It is shown that a minimization of the distance by means of a simulated annealing procedure yields better results than a previously used Levenberg-Marquardt algorithm, which permits a...
COLSS Axial Power Distribution Synthesis using Artificial Neural Network with Simulated Annealing
Shim, K. W.; Oh, D. Y.; Kim, D. S.; Choi, Y. J.; Park, Y. H. [KEPCO Nuclear Fuel Company, Inc., Daejeon (Korea, Republic of)]
2015-05-15
The core operating limit supervisory system (COLSS) is an application program implemented in the plant monitoring system (PMS) of nuclear power plants (NPPs). COLSS aids the operator in maintaining plant operation within selected limiting conditions for operation (LCOs), such as the departure from nucleate boiling ratio (DNBR) margin and the linear heat rate (LHR) margin. To calculate these LCOs, COLSS uses the core-averaged axial power distribution (APD). In COLSS, a 40-node APD is synthesized from the 5-level in-core neutron flux detector signals using the Fourier series method. We propose an artificial neural network (ANN) with simulated annealing (SA) instead of the Fourier series method to synthesize the APD in COLSS. The proposed method is more accurate than the current method, as shown by the axial-shape RMS errors.
Engineering phase shifter domains for multiple QPM using simulated annealing algorithm
Siva, Chellappa; Sunder Meetei, Toijam; Shiva, Prabhakar; Narayanan, Balaji; Arvind, Ganesh; Boomadevi, Shanmugam; Pandiyan, Krishnamoorthy
2017-10-01
We have utilized the general algorithm of simulated annealing (SA) to engineer the phase shifter domains in a quasi-phase-matching (QPM) device to generate multiple frequency conversion. SA is an algorithm generally used to find the global maximum or minimum of a given random function. Here, we utilize it to generate multiple QPM second harmonic generation (SHG) by distributing phase shifters suitably. In general, phase shifters are distributed in a QPM device with some specific profile along its length to generate multiple QPM SHG. Using the SA algorithm, the locations of these phase shifters can be easily identified to obtain the desired multiple QPM response with higher conversion efficiency. The methodology for generating the desired multiple QPM SHG using the SA algorithm is discussed in detail.
A hybrid Tabu search-simulated annealing method to solve quadratic assignment problem
Mohamad Amin Kaviani
2014-06-01
The quadratic assignment problem (QAP) has been considered one of the most complicated combinatorial problems. The problem is NP-hard and optimal solutions are not available for large-scale instances. This paper presents a hybrid method, called TABUSA, that combines tabu search and simulated annealing to solve the QAP. Using some well-known problems from QAPLIB, generated by Burkard et al. (1997) [Burkard, R. E., Karisch, S. E., & Rendl, F. (1997). QAPLIB - a quadratic assignment problem library. Journal of Global Optimization, 10(4), 391-403.], both TABUSA and TS are coded in MATLAB and compared in terms of relative percentage deviation (RPD) for all instances. The performance of the proposed method is examined against tabu search, and the preliminary results indicate that the hybrid method is capable of solving real-world problems efficiently.
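The core of such a hybrid can be sketched in a few lines. The following is a minimal illustration of Metropolis acceptance combined with a short tabu memory and an aspiration rule, not the authors' TABUSA code; all parameter values are illustrative:

```python
import math
import random
from collections import deque

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabusa(flow, dist, iters=20000, t0=100.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cost
    tabu = deque(maxlen=n)  # short memory of recently accepted swaps
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        move = (min(i, j), max(i, j))
        perm[i], perm[j] = perm[j], perm[i]
        new_cost = qap_cost(perm, flow, dist)
        # aspiration: a tabu move is allowed if it beats the best cost so far
        allowed = move not in tabu or new_cost < best_cost
        if allowed and (new_cost < cost
                        or rng.random() < math.exp(-(new_cost - cost) / t)):
            cost = new_cost
            tabu.append(move)
            if cost < best_cost:
                best, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo rejected swap
        t *= cooling
    return best, best_cost
```

On a QAPLIB-sized instance the cost would normally be updated incrementally (delta evaluation) rather than recomputed from scratch at every move.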
Fabrication of simulated plate fuel elements: Defining role of stress relief annealing
Kohli, D.; Rakesh, R.; Sinha, V. P.; Prasad, G. J.; Samajdar, I.
2014-04-01
This study involved fabrication of simulated plate fuel elements. Uranium silicide of actual fuel elements was replaced with yttria. The fabrication stages were otherwise identical. The final cold rolled and/or straightened plates, without stress relief, showed an inverse relationship between bond strength and out of plane residual shear stress (τ13). Stress relief of τ13 was conducted over a range of temperatures/times (200-500 °C and 15-240 min) and led to corresponding improvements in bond strength. Fastest τ13 relief was obtained through 300 °C annealing. Elimination of microscopic shear bands, through recovery and partial recrystallization, was clearly the most effective mechanism of relieving τ13.
Shape optimization of road tunnel cross-section by simulated annealing
Sobótka Maciej
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
Simulated annealing for three-dimensional low-beta reduced MHD equilibria in cylindrical geometry
Furukawa, M
2016-01-01
Simulated annealing (SA) is applied to the three-dimensional (3D) equilibrium calculation of ideal, low-beta reduced MHD in cylindrical geometry. SA is based on the theory of Hamiltonian mechanics. The dynamical equation of the original system, low-beta reduced MHD in this study, is modified so that the energy changes monotonically while the Casimir invariants are preserved in the artificial dynamics. An equilibrium of the system is given by an extremum of the energy; therefore SA can be used as a method for calculating ideal MHD equilibria. Previous studies demonstrated that SA succeeds in reaching various MHD equilibria in a two-dimensional rectangular domain. In this paper, the theory is applied to 3D equilibria of ideal, low-beta reduced MHD. An example of an equilibrium with magnetic islands, obtained as a lower-energy state, is shown. Several versions of the artificial dynamics that can effect smoothing are also developed.
An Archived Multi Objective Simulated Annealing Method to Discover Biclusters in Microarray Data
Mohsen Lashkargir
2011-01-01
With the advent of microarray technology it has become possible to measure thousands of gene expression values in a single experiment. Analysis of large-scale genomics data, notably gene expression, initially focused on clustering methods. Recently, biclustering techniques were proposed for revealing submatrices showing unique patterns. Biclustering, or simultaneous clustering of both genes and conditions, is challenging, particularly for the analysis of high-dimensional gene expression data in information retrieval, knowledge discovery, and data mining. In biclustering of microarray data, several objectives have to be optimized simultaneously, and often these objectives conflict with each other. A multi-objective model is very suitable for solving this problem. We propose an algorithm based on multi-objective simulated annealing for discovering biclusters in gene expression data. Experimental results on benchmark databases show a significant improvement in overlap among biclusters, coverage of elements in the gene expression matrix, and quality of biclusters.
Huq, Ashfia; Stephens, P W
2003-02-01
Recent advances in crystallographic computing and availability of high-resolution diffraction data have made it relatively easy to solve crystal structures from powders that would have traditionally required single crystal samples. The success of direct space methods depends heavily on starting with an accurate molecular model. In this paper we address the applicability of using these methods in finding subtleties such as disorder in the molecular conformation that might not be known a priori. We use ranitidine HCl as our test sample as it is known to have a conformational disorder from single crystal structural work. We redetermine the structure from powder data using simulated annealing and show that the conformational disorder is clearly revealed by this method.
Dawei Chen
2015-01-01
This paper analyzes the impact factors and principles of siting urban refueling stations and proposes a three-stage method. The main objective of the method is to minimize refueling vehicles’ detour time. The first stage aims at identifying the most frequently traveled road segments for siting refueling stations. The second stage focuses on adding additional refueling stations to serve vehicles whose demands are not directly satisfied by the refueling stations identified in the first stage. The last stage further adjusts and optimizes the refueling station plan generated by the first two stages. A genetic simulated annealing algorithm is proposed to solve the optimization problem in the second stage and the results are compared to those from the genetic algorithm. A case study is also conducted to demonstrate the effectiveness of the proposed method and algorithm. The results indicate the proposed method can provide practical and effective solutions that help planners and government agencies make informed refueling station location decisions.
Application of simulated annealing to solve multi-objectives for aggregate production planning
Atiya, Bayda; Bakheet, Abdul Jabbar Khudhur; Abbas, Iraq Tereq; Bakar, Mohd. Rizam Abu; Soon, Lee Lai; Monsi, Mansor Bin
2016-06-01
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach for multi-objective linear programming to solve APP. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and requires minimal time.
Application of simulated annealing algorithm to improve work roll wear model in plate mills
(no author listed)
2002-01-01
Employing the Simulated Annealing Algorithm (SAA) and many measured data, a calculation model of work roll wear was built for the 2,800 mm 4-high mill of Wuhan Iron and Steel (Group) Co. (WISCO). The model is a semi-theoretical practical formula whose pattern and magnitude could hardly be determined with classical optimization methods, but the problem could be resolved by SAA. The model predicts the wear profiles of the work rolls in a rolling unit with high precision. After one year of application, the results show that the model is feasible in engineering and can be applied to predict the wear profiles of work rolls in other mills.
An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm
Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong
2017-03-01
Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies with the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate formed by a cascaded system of birefringent plates are studied. For the analysis of the combination, we use the Jones matrix method to derive general expressions for the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed based on the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can change linearly polarized light into circularly polarized light with a degree of linear polarization (DOLP) of less than 3.2% over that wide wavelength range.
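The Jones-matrix bookkeeping behind such an analysis can be illustrated as follows. The retarder parametrization is the standard textbook form, and the cascaded achromatic design itself is not reproduced here:

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder: retardation delta, fast axis at theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return R @ D @ R.T

def dolp(E):
    """Degree of linear polarization of a (pure) Jones vector."""
    Ex, Ey = E
    s0 = abs(Ex) ** 2 + abs(Ey) ** 2
    s1 = abs(Ex) ** 2 - abs(Ey) ** 2
    s2 = 2 * np.real(Ex * np.conj(Ey))
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0

E_in = np.array([1.0, 0.0])          # horizontal linear input
qwp = retarder(np.pi / 2, np.pi / 4)  # ideal quarter-wave plate at 45 degrees
print(dolp(qwp @ E_in))               # ~0: the output is circular
```

For a cascade of plates, the system matrix is just the ordered product of the individual retarder matrices, and SA would then tune each plate's thickness and azimuth to keep the output DOLP low across the band.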
Idoumghar, L. [Haute Alcace Univ., Mulhouse (France); Fodorean, D.; Mirraoui, A. [Univ. of Technology of Belfort-Montbeliard, Belfort (France). Dept. of Electrical Engineering and Control Systems
2010-03-09
Metaheuristics algorithms can solve complex optimization problems. A unique simulated annealing (SA) algorithm for multi-objective optimization was presented in this paper. The proposed SA algorithm was validated on five standard benchmark mathematical functions and improved the design of an inset permanent magnet motor with concentrated flux (IPMM-CF). The paper provided a description of the SA algorithm and discussed the results. The five benchmarks that were studied included Rastrigin's function; Rosenbrock's function; Michalewicz's function; Schwefel's function; and Noisy's function. The findings were also compared with results obtained by using the Ant Colony paradigm as well as with a particle swarm algorithm. Conclusions and further research options were also offered. It was concluded that the proposed approach has better performance in terms of accuracy, convergence rate, stability and robustness. 15 refs., 4 tabs., 9 figs.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
Vasios C.E.
2003-01-01
In the present work, a new method for the classification of Event Related Potentials (ERPs) is proposed. The proposed method consists of two modules: the feature extraction module and the classification module. The feature extraction module implements a Multivariate Autoregressive model in conjunction with the Simulated Annealing technique for the selection of optimum features from ERPs. The classification module is implemented with a single three-layer neural network, trained with the back-propagation algorithm, and classifies the data into two classes: patients and control subjects. The method, in the form of a Decision Support System (DSS), has been thoroughly tested on a number of patient datasets (OCD, FES, depressives, and drug users), achieving classification accuracy of up to 100%.
Simulated Annealing for Ground State Energy of Ionized Donor Bound Excitons in Semiconductors
YAN Hai-Qing; TANG Chen; LIU Ming; ZHANG Hao; ZHANG Gui-Min
2004-01-01
We present a global optimization method, simulated annealing, for the ground state energies of excitons. The proposed method does not require the partial derivatives with respect to each variational parameter or the solution of an eigenequation, so the present method is simpler in software programming than the variational method and overcomes its major difficulties. The ground state energies of ionized-donor-bound excitons (D+, X) have been calculated for all values of the effective electron-to-hole mass ratio σ and compared with those obtained by the variational method. The results demonstrate that the proposed method is simple, accurate, and has more advantages than the traditional methods in calculation.
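As a toy version of the idea, the sketch below minimizes a one-parameter variational energy with derivative-free SA. The hydrogen ground-state functional (in atomic units) stands in for the (D+, X) energy functional, which is not reproduced from the paper; the annealing schedule is likewise illustrative:

```python
import math
import random

def energy(alpha):
    """Variational energy of hydrogen with trial psi ~ exp(-alpha*r), in a.u.
    Minimum is E = -0.5 at alpha = 1."""
    return 0.5 * alpha ** 2 - alpha

def anneal(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=1):
    """Derivative-free simulated annealing over a single parameter."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        xn = x + rng.gauss(0, 0.1)   # random trial move; no gradients needed
        fn = f(xn)
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn           # Metropolis acceptance
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

alpha, e = anneal(energy, x0=3.0)
print(alpha, e)  # close to 1.0 and -0.5
```

The same loop works unchanged for multi-parameter trial wavefunctions; only the proposal step needs to perturb a vector instead of a scalar.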
Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi
Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. This is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed in a past period to keep its performance in a future period. In order to maintain good portfolio performance, we propose the extended information ratio as an objective function, using the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, we generate neighbors by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios keep good performance when the market trend of the future period differs from that of the past period.
Two-Dimensional IIR Filter Design Using Simulated Annealing Based Particle Swarm Optimization
Supriya Dhabal
2014-01-01
We present a novel hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) for the design of two-dimensional recursive digital filters. The proposed method, known as SA-PSO, integrates the global search ability of PSO with the local search ability of SA, each offsetting the weaknesses of the other. The Metropolis acceptance criterion is included in the basic PSO algorithm to increase the swarm’s diversity by sometimes accepting weaker solutions as well. The experimental results reveal that the performance of the optimal filter designed by the proposed SA-PSO method is improved. Further, the convergence behavior and optimization accuracy of the proposed method have been improved significantly, and computational time is also reduced. In addition, the proposed SA-PSO method produces the best optimal solution with lower mean and variance, which indicates that the algorithm can be used more efficiently in realizing two-dimensional digital filters.
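A minimal sketch of the hybridization idea folds the Metropolis criterion into the personal-best update of a plain PSO. It is shown on a simple sphere function rather than the filter-design objective, and all coefficients are illustrative:

```python
import math
import random

def sphere(x):
    """Simple convex test objective with minimum 0 at the origin."""
    return sum(v * v for v in x)

def sa_pso(f, dim=2, n=20, iters=300, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    t = 1.0
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            # Metropolis criterion: occasionally accept a worse personal best,
            # which keeps the swarm diverse early in the run
            if c < pcost[i] or rng.random() < math.exp(-(c - pcost[i]) / t):
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
        t *= 0.98  # cooling schedule
    return gbest, gcost
```

As the temperature drops, the acceptance test degenerates to the usual greedy personal-best update, so late iterations behave like standard PSO.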
Jingwei Song
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weigh a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error has been reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average has been reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%.
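One way to realize such a variable-weight combination is to let SA search the weight simplex for the blend minimizing mean absolute percentage error on a training window. The member forecasts and data below are placeholders, not the Seattle series or the authors' exact scheme:

```python
import math
import random

def mape(pred, actual):
    """Mean absolute percentage error."""
    return sum(abs(p - a) / abs(a) for p, a in zip(pred, actual)) / len(actual)

def anneal_weights(preds, actual, steps=3000, seed=0):
    """SA search for convex combination weights over the member forecasts."""
    rng = random.Random(seed)
    k = len(preds)

    def combine(w):
        return [sum(w[m] * preds[m][t] for m in range(k))
                for t in range(len(actual))]

    w = [1.0 / k] * k
    cost = mape(combine(w), actual)
    best_w, best_c = w[:], cost
    t = 0.1
    for _ in range(steps):
        cand = [max(1e-9, wi + rng.gauss(0, 0.05)) for wi in w]
        s = sum(cand)
        cand = [wi / s for wi in cand]  # renormalize onto the simplex
        c = mape(combine(cand), actual)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            w, cost = cand, c
            if c < best_c:
                best_w, best_c = cand[:], c
        t *= 0.999
    return best_w, best_c
```

In the paper's setting the members would be the chaotic model, ANN, and PLS-SVM forecasts, and the fitted weights would then be applied out-of-sample.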
Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.
2015-12-01
Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could
Statistical mechanics of Hamiltonian adaptive resolution simulations.
Español, P; Delgado-Buscalioni, R; Everaers, R; Potestio, R; Donadio, D; Kremer, K
2015-02-14
The Adaptive Resolution Scheme (AdResS) is a hybrid scheme that allows one to treat a molecular system with different levels of resolution depending on the location of the molecules. The construction of a Hamiltonian based on this idea (H-AdResS) allows one to formulate the usual tools of ensembles and statistical mechanics. We present a number of exact and approximate results that provide a statistical mechanics foundation for this simulation method. We also present simulation results that illustrate the theory.
Fully Adaptive Radar Modeling and Simulation Development
2017-04-01
AFRL-RY-WP-TR-2017-0074, "Fully Adaptive Radar Modeling and Simulation Development," Kristine L. Bell and Anthony Kellems, Metron, Inc. The work supported the North Atlantic Treaty Organization (NATO) Sensors Electronics Technology (SET)-227 Panel on Cognitive Radar. The FAR M&S architecture developed in Phase I allows for... the Air Force's previously developed radar M&S tools. The report provides an overview of the FAR framework in Chapter 3.
Min Wang
2017-01-01
PFC2D(3D) is commercial software commonly used to model crack initiation in rock and rock-like materials. For PFC2D(3D) numerical simulation, a proper set of microparameters needs to be determined before the simulation. To obtain a proper set of microparameters for a PFC2D(3D) model based on the macroparameters obtained from physical experiments, a novel technique is presented in this paper. An improved simulated annealing algorithm was employed to calibrate the microparameters of the PFC2D(3D) numerical simulation model. A Python script completely controls the calibration process, which can terminate automatically based on a termination criterion. The microparameter calibration process is not based on establishing a relationship between microparameters and macroparameters; instead, the microparameters are calibrated according to the improved simulated annealing algorithm. Using the proposed approach, the microparameters of both the contact-bond model and the parallel-bond model in PFC2D(3D) can be determined. To verify the validity of calibrating the microparameters of PFC2D(3D) via the improved simulated annealing algorithm, some examples were selected from the literature. The corresponding numerical simulations were performed, and the results indicated that the proposed method is reliable for calibrating the microparameters of a PFC2D(3D) model.
Elemental thin film depth profiles by ion beam analysis using simulated annealing - a new tool
Jeynes, C [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom); Barradas, N P [Instituto Tecnologico e Nuclear, E.N. 10, Sacavem (Portugal); Marriott, P K [Department of Statistics, National University of Singapore, Singapore (Singapore); Boudreault, G [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom); Jenkin, M [School of Electronics Computing and Mathematics, University of Surrey, Guildford (United Kingdom); Wendler, E [Friedrich-Schiller-Universitaet Jena, Institut fuer Festkoerperphysik, Jena (Germany); Webb, R P [University of Surrey Ion Beam Centre, Guildford, GU2 7XH (United Kingdom)
2003-04-07
Rutherford backscattering spectrometry (RBS) and related techniques have long been used to determine the elemental depth profiles in films a few nanometres to a few microns thick. However, although obtaining spectra is very easy, solving the inverse problem of extracting the depth profiles from the spectra is not possible analytically except for special cases. It is because these special cases include important classes of samples, and because skilled analysts are adept at extracting useful qualitative information from the data, that ion beam analysis is still an important technique. We have recently solved this inverse problem using the simulated annealing algorithm. We have implemented the solution in the 'IBA DataFurnace' code, which has been developed into a very versatile and general new software tool that analysts can now use to rapidly extract quantitative accurate depth profiles from real samples on an industrial scale. We review the features, applicability and validation of this new code together with other approaches to handling IBA (ion beam analysis) data, with particular attention being given to determining both the absolute accuracy of the depth profiles and statistically accurate error estimates. We include examples of analyses using RBS, non-Rutherford elastic scattering, elastic recoil detection and non-resonant nuclear reactions. High depth resolution and the use of multiple techniques simultaneously are both discussed. There is usually systematic ambiguity in IBA data and Butler's example of ambiguity (1990 Nucl. Instrum. Methods B 45 160-5) is reanalysed. Analyses are shown: of evaporated, sputtered, oxidized, ion implanted, ion beam mixed and annealed materials; of semiconductors, optical and magnetic multilayers, superconductors, tribological films and metals; and of oxides on Si, mixed metal silicides, boron nitride, GaN, SiC, mixed metal oxides, YBCO and polymers. (topical review)
Lutsyshyn, Yaroslav
2016-01-01
We developed a CUDA-based parallelization of the annealing method for the inverse Laplace transform problem. The algorithm is based on an annealing algorithm and minimizes the residue of the reconstruction of the spectral function. We introduce local updates which preserve the first two sum rules and allow an efficient parallel CUDA implementation. Annealing is performed with the Monte Carlo method on a population of Markov walkers. We propose an imprinted branching method to further improve the convergence of the annealing. The algorithm is tested on a truncated double-peak Lorentzian spectrum, with examples of how error in the input data affects the reconstruction.
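A local update that preserves the first two sum rules can be built from three amplitudes at once: changing one and solving a 2x2 system for the other two leaves both Σ A and Σ wA unchanged. The construction below is a plausible sketch of such a move, not necessarily the authors' exact scheme:

```python
def local_update(A, w, i, j, k, delta):
    """Shift A[i] by delta and compensate at indices j and k so that
    sum(A) and sum(w*A) are both preserved exactly.

    Solving  dj + dk = -delta  and  w[j]*dj + w[k]*dk = -w[i]*delta
    gives the unique compensating pair below (requires w[j] != w[k])."""
    dj = delta * (w[k] - w[i]) / (w[j] - w[k])
    dk = -delta - dj
    if A[i] + delta < 0 or A[j] + dj < 0 or A[k] + dk < 0:
        return False  # reject: the spectral function must stay non-negative
    A[i] += delta
    A[j] += dj
    A[k] += dk
    return True
```

Because each move touches only three bins, many such moves on disjoint index triples can be proposed by independent walkers in parallel, which is what makes the update GPU-friendly.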
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles with optimization techniques, to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, use has been made of the cross-validation method, and the best model is introduced to the user as the output. In order to check the capability of the proposed objective function and procedure, 3 case studies have been presented.
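A stripped-down version of such an automatic fit uses a spherical model and Cressie-style weighted least squares as the objective minimized by SA. The model choice, weighting, and parameter values below are illustrative assumptions, not the paper's program:

```python
import math
import random

def spherical(h, c0, c, a):
    """Spherical variogram model: nugget c0, sill contribution c, range a."""
    if h >= a:
        return c0 + c
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)

def wls(params, lags, gammas, counts):
    """Cressie-style weighted least squares objective."""
    c0, c, a = params
    tot = 0.0
    for h, g, n in zip(lags, gammas, counts):
        m = spherical(h, c0, c, a)
        tot += n * ((g - m) / max(m, 1e-9)) ** 2
    return tot

def fit_sa(lags, gammas, counts, steps=20000, seed=0):
    """SA search over (nugget, sill contribution, range)."""
    rng = random.Random(seed)
    x = [0.5, max(gammas), max(lags) / 2.0]  # crude initial guess
    cost = wls(x, lags, gammas, counts)
    best, best_c = x[:], cost
    t = 1.0
    for _ in range(steps):
        # multiplicative perturbations keep all parameters positive
        cand = [max(1e-6, xi * (1 + rng.gauss(0, 0.05))) for xi in x]
        c = wls(cand, lags, gammas, counts)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            x, cost = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        t *= 0.9995
    return best, best_c
```

Nested (multi-structure) models simply add more (c, a) pairs to the parameter vector, which is where SA becomes genuinely useful compared with least-squares solvers that need good starting values.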
Adaptive resolution simulation of liquid water
Praprotnik, Matej [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Matysiak, Silvina [Department of Chemistry, Rice University, 6100 Main Street, Houston, TX 77005 (United States); Delle Site, Luigi [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Kremer, Kurt [Max-Planck-Institut fuer Polymerforschung, Ackermannweg 10, D-55128 Mainz (Germany); Clementi, Cecilia [Department of Chemistry, Rice University, 6100 Main Street, Houston, TX 77005 (United States)
2007-07-25
Water plays a central role in biological systems and processes, and is equally relevant in a large range of industrial and technological applications. Being the most important natural solvent, its presence uniquely influences biological function as well as technical processes. Because of their importance, aqueous solutions are among the most experimentally and theoretically studied systems. However, many questions still remain open. Both experiments and theoretical models are usually restricted to specific cases. In particular, all-atom simulations of biomolecules and materials in water are computationally very expensive and often not possible, mainly due to the computational effort to obtain water-water interactions in regions not relevant for the problem under consideration. In this paper we present a coarse-grained model that can reproduce the behaviour of liquid water at standard temperature and pressure remarkably well. The model is then used in a multiscale simulation of liquid water, where a spatially adaptive molecular resolution procedure allows one to change from a coarse-grained to an all-atom representation on-the-fly. We show that this approach leads to the correct description of essential thermodynamic and structural properties of liquid water. Our adaptive multiscale scheme allows for significantly more extensive simulations than existing approaches by taking explicit water into account only in the regions where the atomistic details are physically relevant. (fast track communication)
Hamiltonian adaptive resolution simulation for molecular liquids.
Potestio, Raffaello; Fritsch, Sebastian; Español, Pep; Delgado-Buscalioni, Rafael; Kremer, Kurt; Everaers, Ralf; Donadio, Davide
2013-03-08
Adaptive resolution schemes allow the simulation of a molecular fluid treating simultaneously different subregions of the system at different levels of resolution. In this work we present a new scheme formulated in terms of a global Hamiltonian. Within this approach equilibrium states corresponding to well-defined statistical ensembles can be generated making use of all standard molecular dynamics or Monte Carlo methods. Models at different resolutions can thus be coupled, and thermodynamic equilibrium can be modulated keeping each region at desired pressure or density without disrupting the Hamiltonian framework.
Chiapetto, M. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium); Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Becquart, C.S. [Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Domain, C. [EDF R and D, Departement Materiaux et Mecanique des Composants, Les Renardieres, Moret sur Loing (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium)
2015-01-01
Post-irradiation annealing experiments are often used to obtain clearer information on the nature of defects produced by irradiation. However, their interpretation is not always straightforward without the support of physical models. We apply here a physically-based set of parameters for object kinetic Monte Carlo (OKMC) simulations of the nanostructural evolution of FeMnNi alloys under irradiation to the simulation of their post-irradiation isochronal annealing, from 290 to 600 C. The model adopts a ''grey alloy'' scheme, i.e. the solute atoms are not introduced explicitly, only their effect on the properties of point-defect clusters is. Namely, it is assumed that both vacancy and SIA clusters are significantly slowed down by the solutes. The slowing down increases with size until the clusters become immobile. Specifically, the slowing down of SIA clusters by Mn and Ni can be justified in terms of the interaction between these atoms and crowdions in Fe. The results of the model compare quantitatively well with post-irradiation isochronal annealing experimental data, providing clear insight into the mechanisms that determine the disappearance or re-arrangement of defects as functions of annealing time and temperature.
Helio Yochihiro Fuchigami
2014-08-01
This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical, and food industries. This work proposes six Simulated Annealing algorithms, their perturbation schemes, and an algorithm for initial sequence generation. The study can be classified as “applied research” by nature, “exploratory” in its objectives, “experimental” in its procedures, and “quantitative” in its approach. The proposed algorithms were effective regarding solution quality and computationally efficient. Results of an Analysis of Variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested because it provides the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
Ümmühan Başaran Filik
2010-01-01
This paper presents the solution of the unit commitment (UC) problem using the Modified Subgradient (MSG) method combined with a Simulated Annealing (SA) algorithm. The UC problem is one of the important hard-to-solve problems in power system engineering. Lagrangian relaxation (LR) based methods are commonly used to solve the UC problem. The main disadvantage of this group of methods is the gap between the dual and the primal solutions, which causes significant problems in the quality of the feasible solution. In this paper, the MSG method, which does not require any convexity or differentiability assumptions, is used to solve the UC problem. Depending on the initial value, the MSG method reaches a zero duality gap. The SA algorithm is used to assign an appropriate initial value for the MSG method. The major advantage of the proposed approach is that it guarantees a zero duality gap independently of the size of the problem. To show the advantages of this approach, the four-unit Tuncbilek thermal plant and a ten-unit thermal plant commonly used in the literature are chosen as test systems. The penalty function (PF) method is also used for comparison with the proposed method in terms of total cost and UC schedule.
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by applying a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that the proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned with the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
A. Mateos
2016-01-01
Technological advances are required to adapt air traffic control systems to the future growth of air traffic. In particular, detection and resolution of conflicts between aircraft is a problem that has attracted much attention in the last decade, becoming vital to improving safety standards in free-flight unstructured environments. We propose using the archive simulated annealing-based multiobjective optimization algorithm to deal with this problem, accounting for three admissible maneuvers (velocity, turn, and altitude changes) in a multiobjective context. The minimization of the maneuver number and magnitude, time delays, and deviations in the leaving points are considered for analysis. The optimal values for the algorithm parameter set are identified in the most complex instance, in which all aircraft have conflicts with each other, for 5, 10, and 20 aircraft. Moreover, the performance of the proposed approach is analyzed by comparison with the Pareto front, computed using brute force for 5 aircraft, and the algorithm is also illustrated with a random instance with 20 aircraft.
Simulated Annealing-Based Ant Colony Algorithm for Tugboat Scheduling Optimization
Qi Xu
2012-01-01
As the “first service station” for ships in the whole port logistics system, the tugboat operation system is one of the most important systems in port logistics. This paper formulates the tugboat scheduling problem as a multiprocessor task scheduling problem (MTSP) after analyzing the characteristics of tugboat operation. The model considers multiple anchorage bases, different operation modes, and three stages of operations (berthing, shifting-berth, and unberthing). The objective is to minimize the total operation time for all tugboats in a port. A hybrid simulated annealing-based ant colony algorithm is proposed to solve the problem. Numerical experiments without the shifting-berth operation verify the effectiveness of the algorithm and show that more effective sailing may be possible if tugboats return to the anchorage base promptly. Experiments with the shifting-berth operation show that the objective is most sensitive to the proportion of shifting-berth operations, is influenced slightly by the tugboat deployment scheme, and is not sensitive to the handling operation times.
SFN Coverage Optimization for DVB-T2 Digital TV Transmitters Using the Simulated Annealing Method
Adib Nur Ikhwan
2013-09-01
Digital TV broadcasting in Indonesia initially adopted the DVB-T standard (Digital Video Broadcasting-Terrestrial), which was replaced in 2012 by DVB-T2 (Digital Video Broadcasting-Terrestrial Second Generation). Consequently, earlier studies, including digital TV coverage optimization, are no longer relevant. Coverage is one of the important aspects of digital TV broadcasting. In this work, SFN (Single Frequency Network) coverage optimization for digital TV transmitters is carried out with the SA (Simulated Annealing) method. SA searches for a solution by moving from one solution to another, selecting the solution with the smallest energy (fitness) function. The optimization is performed by varying the positions of the digital TV transmitters to obtain the best placement, using 10 cooling schedules with two test runs each, in both the 2K and 4K FFT modes. The result of this study is that the SFN coverage area of the DVB-T2 digital TV transmitters achieved an average best relative coverage improvement of 2.348% with cooling schedule 7.
Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing
Khanh Doan V.K.
2014-06-01
Thermo-electric Coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant or moving parts are needed, they generate no electrical or acoustic noise, and they are environmentally friendly. Over the past decades, many studies have sought to improve the efficiency of TECs by enhancing material parameters and design parameters. The material parameters are restricted by currently available materials and module fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length, and the number of legs. Two elements that play an important role when considering the suitability of TECs for an application are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, some previous research is reviewed to show the diversity of optimization approaches in TEC design for enhancing performance and efficiency. Then, single-objective optimization problems (SOP) are solved using a Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize geometric properties so that TECs operate at near-optimal conditions. Both equality and inequality constraints are taken into consideration.
Simulated annealing (SA) applied to vehicle routing problems with soft time windows
Suphan Sodsoon
2014-12-01
This research applies and develops a meta-heuristic method to solve Vehicle Routing Problems with Soft Time Windows (VRPSTW). The case considered has a single depot and multiple, generally sparse customers, each with a different but known demand and a specific time period for receiving it. This is a representative combinatorial optimization problem in operations research and is known to be NP-hard. The algorithm uses Simulated Annealing (SA) to determine near-optimal solutions with short solution times. After development, the algorithms are applied to examine the factors and the optimal extended time windows, tested on Solomon's vehicle routing problems with specific time windows from the OR-Library, with up to 25 customers. Six problems are included: C101, C102, R101, R102, RC101, and RC102. The results show the optimal extended time window at the 50% level. Finally, comparing these answers with the cases of vehicle routing under specific time windows and flexible time windows shows percentage errors of approximately -28.57% in the number of vehicles and approximately -28.57% in distances, with the algorithm spending an average processing time of 45.5 s per problem.
Kang, Jiyoung; Yamasaki, Kazuhiko; Sano, Kuniaki; Tsutsui, Ken; Tsutsui, Kimiko M.; Tateno, Masaru
2017-01-01
Theoretical analyses of multivariate data have become increasingly important in various scientific disciplines. The multivariate curve resolution alternating least-squares (MCR-ALS) method is an integrated and systematic tool to decompose various types of spectral data into several pure spectra corresponding to distinct species. However, in the present study, the MCR-ALS calculation provided only unreasonable solutions when used to process the circular dichroism spectra of double-stranded DNA (228 bp) in complex with a DNA-binding peptide at various concentrations. To resolve this problem, we developed an algorithm that includes a simulated annealing (SA) protocol (the SA-MCR-ALS method) to facilitate the expansion of the sampling space. The analysis successfully decomposed the aforementioned data into three reasonable pure spectra. Thus, our SA-MCR-ALS scheme provides a useful tool for effective extended sampling to investigate the substantial and detailed properties of various forms of multivariate data with significant difficulties in the degrees of freedom.
Qin, Jin; Xiang, Hui; Ye, Yong; Ni, Linglin
2015-01-01
A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Assuming that all stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. We also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors.
Ghosh, P; Bagchi, M C
2009-01-01
With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN) with GA- and SA-based feature selection have also been applied to QSAR modeling of the quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models are developed on the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches is presented. Further analysis using the 3D-QSAR technique identifies two models, obtained by the GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of steric and electrostatic field effects generated by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
魏关锋; 姚平经; LUOXing; ROETZELWilfried
2004-01-01
The multi-stream heat exchanger network synthesis (HENS) problem can be formulated as a mixed-integer nonlinear programming model following Yee et al. Its nonconvex nature leads to the existence of more than one optimum and to computational difficulty for traditional algorithms in finding the global optimum. Compared with deterministic algorithms, evolutionary computation provides a promising approach to tackle this problem. In this paper, a mathematical model of the multi-stream heat exchanger network synthesis problem is set up. In contrast to the assumption of isothermal mixing of stream splits, and thus the linear constraints, of Yee et al., non-isothermal mixing is supported. As a consequence, nonlinear constraints arise and the nonconvexity of the objective function increases. To solve the mathematical model, an algorithm named GA/SA (parallel genetic/simulated annealing algorithm) is detailed for application to the multi-stream heat exchanger network synthesis problem. The performance of the proposed approach is demonstrated with three examples, and the obtained solutions indicate that the presented approach is effective for multi-stream HENS.
Finding a Hadamard Matrix by Simulated Annealing of Spin-Vectors
Suksmono, Andriyan Bayu
2016-01-01
Reformulating a combinatorial problem as the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as SA (Simulated Annealing). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into an SH (Semi-normalized Hadamard) matrix, whose first column is the unity vector and whose remaining columns are vectors with an equal number of -1 and +1 entries, called SH-vectors. We define SH spin-vectors to represent the SH vectors, which play the role of the spins in the Ising model. The topology of the lattice is generalized into a graph, whose edges represent orthogonality relationships among the SH spin-vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+,-} spin-pairs within the SH-spin vectors whi...
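As a rough illustration of the scheme in this abstract, the sketch below anneals balanced ±1 "spin-vector" columns by random {+,-} pair exchanges, taking the sum of squared column inner products as the energy (zero exactly when all columns are orthogonal). The energy definition, temperature schedule, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import itertools
import math
import random

def energy(cols):
    # Sum of squared inner products over distinct column pairs;
    # zero if and only if every pair of columns is orthogonal.
    return sum(
        sum(a * b for a, b in zip(u, v)) ** 2
        for u, v in itertools.combinations(cols, 2)
    )

def anneal_hadamard(n, steps=20000, t0=4.0, cooling=0.9995):
    """Search for an n x n semi-normalized Hadamard matrix (n divisible by 4)."""
    # First column is the unity vector; the rest are balanced +/-1 spin-vectors.
    cols = [[1] * n]
    for _ in range(n - 1):
        col = [1] * (n // 2) + [-1] * (n // 2)
        random.shuffle(col)
        cols.append(col)
    e, t = energy(cols), t0
    for _ in range(steps):
        if e == 0:
            break  # all columns orthogonal: an SH-matrix has been found
        c = random.randrange(1, n)  # never touch the unity column
        i = random.choice([k for k, s in enumerate(cols[c]) if s == 1])
        j = random.choice([k for k, s in enumerate(cols[c]) if s == -1])
        cols[c][i], cols[c][j] = cols[c][j], cols[c][i]  # exchange a {+,-} pair
        e_new = energy(cols)
        # Metropolis rule: keep improvements, sometimes keep worse states.
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new
        else:
            cols[c][i], cols[c][j] = cols[c][j], cols[c][i]  # undo the exchange
        t *= cooling
    return cols, e
```

The pair exchange preserves the balance of each column, so only orthogonality ever needs to be repaired; for small orders such as n = 4 the search reaches zero energy quickly.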
Masoud Rabbani
2016-02-01
This paper presents the capacitated Windy Rural Postman Problem with several vehicles. Two objectives are considered: minimization of the total cost of all vehicle routes, expressed as the sum of the traversing costs, and reduction of the maximum cost of any vehicle route, in order to find a set of equitable tours for the vehicles. A mathematical formulation is provided. The multi-objective simulated annealing (MOSA) algorithm has been modified for solving this bi-objective NP-hard problem. To increase algorithm performance, the Taguchi technique is applied to design experiments for tuning the parameters of the algorithm. Numerical experiments are presented to show the efficiency of the model. Finally, the results of the MOSA have been compared with the MOCS (multi-objective Cuckoo Search) algorithm to validate the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm provides good solutions and performs significantly better than the MOCS.
张火明; 黄赛花; 管卫兵
2014-01-01
The highest similarity of static characteristics, including both the horizontal and vertical restoring force-displacement characteristics of the total mooring system and the tension-displacement characteristics of a representative single mooring line, between the truncated and full-depth systems is obtained by an annealing simulation algorithm for hybrid discrete variables (ASFHDV for short). A "baton" optimization approach is proposed utilizing ASFHDV: after each baton of optimization, if a few dimensional variables reach their upper or lower limit, the boundary of those dimensional variables is expanded. In consideration of the experimental requirements, the length of the upper mooring line should not be smaller than 8 m, and the diameter of the anchor chain on the bottom should be larger than 0.03 m. A 100,000 t turret-moored FPSO in a water depth of 304 m, with a truncated water depth of 76 m, is taken as an example of equivalent water-depth truncated mooring system optimal design; calculations are performed to obtain the configuration parameters of the truncated mooring system. The numerical results indicate that the present truncated mooring system design is successful and effective.
An archived multi-objective simulated annealing for a dynamic cellular manufacturing system
Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza
2014-05-01
To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides an extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. Two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the considered objectives in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial and feasible solution and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., the ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II, for some randomly generated problems based on several comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.
Back-Analysis of Tunnel Response from Field Monitoring Using Simulated Annealing
Vardakos, Sotirios; Gutierrez, Marte; Xia, Caichu
2016-12-01
This paper deals with the use of field monitoring data to improve predictions of tunnel response during and after construction from numerical models. Computational models are powerful tools for the performance-based engineering analysis and design of geotechnical structures; however, the main challenge to their use is the paucity of information to establish input data needed to yield reliable predictions that can be used in the design of geotechnical structures. Field monitoring can offer not only the means to verify modeling results but also faster and more reliable ways to determine model parameters and to improve the reliability of model predictions. Back-analysis involves the determination of parameters required in computational models using field-monitored data, and is particularly suited to underground constructions, where more information about ground conditions and response becomes available as the construction progresses. A crucial component of back-analysis is an algorithm to find a set of input parameters that will minimize the difference between predicted and measured performance (e.g., in terms of deformations, stresses, or tunnel support loads). Methods of back-analysis can be broadly classified as direct and gradient-based optimization techniques. An alternative methodology to carry out the nonlinear optimization involved in back-analyses is the use of heuristic techniques. Heuristic methods refer to experience-based techniques for problem-solving, learning, and discovery that find a solution which is not guaranteed to be fully optimal, but good enough for a given set of goals. This paper focuses on the use of the heuristic simulated annealing (SA) method in the back-analysis of tunnel responses from field-monitored data. SA emulates the metallurgical processing of metals such as steel by annealing, which involves a gradual and sufficiently slow cooling of a metal from the heated phase, leading to a final material with a minimum of imperfections.
Simulating Astronomical Adaptive Optics Systems Using Yao
Rigaut, François; Van Dam, Marcos
2013-12-01
Adaptive Optics systems are at the heart of the coming Extremely Large Telescopes generation. Given the importance, complexity and required advances of these systems, being able to simulate them faithfully is key to their success, and thus to the success of the ELTs. The type of systems envisioned to be built for the ELTs cover most of the AO breeds, from NGS AO to multiple guide star Ground Layer, Laser Tomography and Multi-Conjugate AO systems, with typically a few thousand actuators. This represents a large step up from the current generation of AO systems, and accordingly a challenge for existing AO simulation packages. This is especially true as, in the past years, computer power has not been following Moore's law in its most common understanding; CPU clocks are hovering at about 3 GHz. Although the use of super computers is a possible solution to run these simulations, being able to use smaller machines has obvious advantages: cost, access, environmental issues. By using optimised code in an already proven AO simulation platform, we were able to run complex ELT AO simulations on very modest machines, including laptops. The platform is YAO. In this paper, we describe YAO, its architecture, its capabilities, the ELT-specific challenges and optimisations, and finally its performance. As an example, execution speed ranges from 5 iterations per second for a 6 LGS 60x60 subapertures Shack-Hartmann wavefront sensor Laser Tomography AO system (including full physical image formation and detector characteristics) up to over 30 iterations/s for a single NGS AO system.
Afanasiev, M.; Pratt, R. G.; Kamei, R.; McDowell, G.
2012-12-01
Crosshole seismic tomography has been used by Vale to provide geophysical images of mineralized massive sulfides in the Eastern Deeps deposit at Voisey's Bay, Labrador, Canada. To date, these data have been processed using traveltime tomography, and we seek to improve the resolution of these images by applying acoustic Waveform Tomography. Due to the computational cost of acoustic waveform modelling, local descent algorithms are employed in Waveform Tomography; due to non-linearity, an initial model is required which predicts first-arrival traveltimes to within a half-cycle of the lowest frequency used. Because seismic velocity anisotropy can be significant in hardrock settings, the initial model must quantify the anisotropy in order to meet the half-cycle criterion. In our case study, significant velocity contrasts between the target massive sulfides and the surrounding country rock led to difficulties in generating an accurate anisotropy model through traveltime tomography, and our starting model for Waveform Tomography failed the half-cycle criterion at large offsets. We formulate a new, semi-global approach for finding the best-fit 1-D elliptical anisotropy model using simulated annealing. Through random perturbations to Thomsen's ε parameter, we explore the L2 norm of the frequency-domain phase residuals in the space of potential anisotropy models: if a perturbation decreases the residuals, it is always accepted, but if a perturbation increases the residuals, it is accepted with the probability P = exp(-(E_i - E)/T). This is the Metropolis criterion, where E_i is the value of the residuals at the current iteration, E is the value of the residuals for the previously accepted model, and T is a probability control parameter, which is decreased over the course of the simulation via a preselected cooling schedule. Convergence to the global minimum of the residuals is guaranteed only for infinitely slow cooling, but in practice good results are obtained from a variety
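The Metropolis acceptance rule and cooling schedule quoted above are generic to simulated annealing; a minimal sketch follows, with an illustrative toy objective and assumed parameter values, not those of the study:

```python
import math
import random

def simulated_annealing(energy, perturb, x0, t0=1.0, cooling=0.999, steps=5000):
    """Generic SA loop using the Metropolis criterion P = exp(-(E_i - E)/T)."""
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = perturb(x)
        e_cand = energy(cand)
        # Always accept improvements; accept worse moves with probability
        # exp(-(E_i - E)/T), where T decays via a geometric cooling schedule.
        if e_cand <= e or random.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best, best_e = x, e
        t *= cooling
    return best, best_e
```

For example, minimizing the toy objective `(x - 3)**2` with a Gaussian perturbation `lambda x: x + random.gauss(0.0, 0.5)` from `x0=0.0` returns a value close to 3. The geometric schedule stands in for the paper's "preselected cooling schedule", whose exact form is not given in the abstract.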
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimal function expression that describes the behavior of the time series. To address the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision under certain noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
Ismail, W.; Hassan, R.; Payne, A.; Swift, S.
2011-01-01
This paper was delivered at AIME 2011: 13th Conference on Artificial Intelligence in Medicine. It presents a method for the detection and classification of blast cells in M3 and other sub-types using simulated annealing and neural networks. In this paper, we increased our test set from 10 images to 20 images. We performed Hill Climbing, Simulated Annealing, and Genetic Algorithms for detecting the blast cells. As a result, simulated annealing is the “best” heuristic search for d...
Adaptive Resolution Simulation in Equilibrium and Beyond
Wang, Han
2014-01-01
In this paper, we investigate the equilibrium statistical properties of both the force and potential interpolations of adaptive resolution simulation (AdResS) within the theoretical framework of grand-canonical-like AdResS (GC-AdResS). The thermodynamic relations between the higher and lower resolutions are derived by considering the absence of fundamental conservation laws in mechanics for both branches of AdResS. To investigate the applicability of the AdResS method for studying properties beyond equilibrium, we demonstrate the accuracy of AdResS in computing dynamical properties in two numerical examples: the velocity autocorrelation of pure water and the conformational relaxation of alanine dipeptide dissolved in water. Theoretical and technical open questions of the AdResS method are discussed at the end of the paper.
Automated integration of genomic physical mapping data via parallel simulated annealing
Slezak, T.
1994-06-01
The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, AC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
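The column-rearrangement objective described above (reordering clone columns so that each object row's members sit contiguously, minimizing gaps) can be sketched as a small serial SA. The parallel server/client machinery and the FISH ordering constraints are omitted, and the example matrix and all parameter values are illustrative assumptions:

```python
import math
import random

def total_gaps(rows, order):
    # A row's "gaps" are its extra runs of 1s beyond the first; a total of
    # zero means every row's members are contiguous under the column order.
    total = 0
    for row in rows:
        runs, prev = 0, 0
        for j in order:
            if row[j] and not prev:
                runs += 1
            prev = row[j]
        total += max(runs - 1, 0)
    return total

def anneal_order(rows, steps=20000, t0=2.0, cooling=0.999):
    """Anneal a column permutation minimizing total gaps over the object rows."""
    n = len(rows[0])
    order = list(range(n))
    e = total_gaps(rows, order)
    best, best_e = order[:], e
    t = t0
    for _ in range(steps):
        if best_e == 0:
            break  # every row is contiguous: nothing left to improve
        i, j = random.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]  # swap two clone columns
        e_new = total_gaps(rows, order)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new
            if e < best_e:
                best, best_e = order[:], e
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
        t *= cooling
    return best, best_e
```

Here `rows` is the M x N binary membership matrix (objects by clones); the real system would additionally reject any swap violating the FISH partial-order constraints.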
Maurer Till
2005-04-01
Background: We have developed the program PERMOL for semi-automated homology modeling of proteins. It is based on restrained molecular dynamics using a simulated annealing protocol in torsion angle space. As the main restraints defining the optimal local geometry of the structure, weighted mean dihedral angles and their standard deviations are used, which are calculated with an algorithm described earlier by Döker et al. (1999, BBRC, 257, 348–350). The overall long-range contacts are established via a small number of distance restraints between atoms involved in hydrogen bonds and backbone atoms of conserved residues. Employing the restraints generated by PERMOL, three-dimensional structures are obtained using standard molecular dynamics programs such as DYANA or CNS. Results: To test this modeling approach, it has been used for predicting the structure of the histidine-containing phosphocarrier protein HPr from E. coli and the structure of the human peroxisome proliferator-activated receptor γ (PPARγ). The divergence between the modeled HPr and the previously determined X-ray structure was comparable to the divergence between the X-ray structure and the published NMR structure. The modeled structure of PPARγ was also very close to the previously solved X-ray structure, with an RMSD of 0.262 nm for the backbone atoms. Conclusion: In summary, we present a new method for homology modeling capable of producing high-quality structure models. An advantage of the method is that it can be used in combination with incomplete NMR data to obtain reasonable structure models in accordance with the experimental data.
Lee, Cheng-Kuang
2014-12-10
© 2014 American Chemical Society. The nanomorphologies of the bulk heterojunction (BHJ) layer of polymer solar cells are extremely sensitive to the electrode materials and thermal annealing conditions. In this work, the correlations among electrode materials, thermal annealing sequences, and the resultant BHJ nanomorphological details of P3HT:PCBM BHJ polymer solar cells are studied by a series of large-scale, coarse-grained (CG) molecular simulations of a system composed of PEDOT:PSS/P3HT:PCBM/Al layers. Simulations are performed for various configurations of electrode materials as well as processing temperatures. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link between morphology and processing conditions. Our analysis indicates that vertical phase segregation of the P3HT:PCBM blend strongly depends on the electrode material and thermal annealing schedule. A thin P3HT-rich film is formed on the top, regardless of the bottom electrode material, when the BHJ layer is exposed to the free surface during thermal annealing. In addition, preferential segregation of P3HT chains and PCBM molecules toward the PEDOT:PSS and Al electrodes, respectively, is observed. Detailed morphology analysis indicated that, surprisingly, vertical phase segregation does not affect the connectivity of donor/acceptor domains with the respective electrodes. However, the formation of P3HT/PCBM depletion zones next to the P3HT/PCBM-rich zones can be a potential bottleneck for electron/hole transport due to an increase in transport pathway length. Analysis in terms of the fractions of intra- and interchain charge transport revealed that the processing schedule affects the average vertical orientation of polymer chains, which may be crucial for enhanced charge transport, nongeminate recombination, and charge collection. The present study establishes a more detailed link between processing and morphology by combining multiscale molecular
Chen, Hongwei; Kong, Xi; Chong, Bo; Qin, Gan; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng
2011-03-01
The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originated from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schrödinger evolution with a NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.
Hasegawa, M.
2011-03-01
The aim of the present study is to elucidate how simulated annealing (SA) works in its finite-time implementation by starting from the verification of its conventional optimization scenario based on equilibrium statistical mechanics. Two main experiments and one supplementary experiment, whose designs are inspired by concepts and methods developed for studies on liquids and glasses, are performed on two types of random traveling salesman problems. In the first experiment, a newly parameterized temperature schedule is introduced to simulate a quasistatic process along the scenario, and a parametric study is conducted to investigate the optimization characteristics of this adaptive cooling. In the second experiment, the search trajectory of the Metropolis algorithm (constant-temperature SA) is analyzed in the landscape paradigm in the hope of drawing a precise physical analogy by comparison with the corresponding dynamics of glass-forming molecular systems. These two experiments indicate that the effectiveness of finite-time SA comes not from equilibrium sampling at low temperature but from downward interbasin dynamics occurring before equilibrium. These dynamics work most effectively at an intermediate temperature varying with the total search time, and thus this effective temperature is identified using the Deborah number. To test directly the role of these relaxation dynamics in the process of cooling, a supplementary experiment is performed using another parameterized temperature schedule with a piecewise variable cooling rate, and the effect of this biased cooling is examined systematically. The results show that the optimization performance is not only dependent on but also sensitive to cooling in the vicinity of the above effective temperature and that this feature is interpreted as a consequence of the presence or absence of the workable interbasin dynamics. It is confirmed for the present instances that the effectiveness of finite-time SA derives from the glassy relaxation
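The constant-temperature SA (Metropolis algorithm) analyzed in the second experiment can be sketched as follows for a random Euclidean traveling salesman problem. The 2-opt reversal move and the parameter values are illustrative assumptions, not the paper's exact setup; comparing the relaxation traces at a low versus a high fixed temperature shows the kind of behavior the study examines.

```python
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour through 2-D city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def metropolis_tsp(cities, temperature, sweeps=100, seed=1):
    """Constant-temperature SA (Metropolis algorithm) on a Euclidean TSP,
    using 2-opt segment reversals as moves. Returns the final tour and the
    trace of tour lengths, which records the relaxation dynamics."""
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    length = tour_length(cities, tour)
    trace = [length]
    for _ in range(sweeps * n):
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        new_len = tour_length(cities, candidate)
        # Metropolis acceptance at fixed temperature
        if new_len <= length or rng.random() < math.exp((length - new_len) / temperature):
            tour, length = candidate, new_len
        trace.append(length)
    return tour, trace
```

At a low temperature the trace relaxes rapidly toward short tours, while at a high temperature it merely wanders among near-random tours; scanning several fixed temperatures is one simple way to locate an effective temperature for optimization.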
Hao, Ge-Fei; Xu, Wei-Fang; Yang, Sheng-Gang; Yang, Guang-Fu
2015-10-23
Protein and peptide structure predictions are of paramount importance for understanding their functions, as well as their interactions with other molecules. However, the use of molecular simulation techniques to directly predict peptide structure from the primary amino acid sequence is always hindered by the rough topology of the conformational space and the limited simulation time scale. We developed here a new strategy, named Multiple Simulated Annealing-Molecular Dynamics (MSA-MD), to identify the native states of a peptide and a miniprotein. A cluster of near-native structures could be obtained by using the MSA-MD method, which turned out to be significantly more efficient in reaching the native structure compared to continuous MD and conventional SA-MD simulation.
Hartmann, Matthias; Bogner, Ludwig
2008-05-01
Inverse treatment planning of intensity-modulated radiation therapy (IMRT) is complicated by several sources of error, which can cause deviations of optimized plans from the true optimal solution. These errors include the systematic and convergence error, the local minima error, and the optimizer convergence error. We minimize these errors by developing an inverse IMRT treatment planning system with a Monte Carlo based dose engine and a simulated annealing search engine as well as a deterministic search engine. In addition, different generalized equivalent uniform dose (gEUD)-based and hybrid objective functions were implemented and investigated with simulated annealing. By means of a head-and-neck IMRT case we have analyzed the properties of these gEUD-based objective functions, including their search space and the existence of local optima errors. We found evidence that the use of a previously published gEUD-based objective function results in an uncommon search space with a golf-hole structure. This special search space structure leads to trapping in local minima, making it extremely difficult to identify the true global minimum, even when using stochastic search engines. Moreover, for the same IMRT case several local optima have been detected by comparing the solutions of 100 different trials using a gradient optimization algorithm with the global optimum computed by simulated annealing. We have demonstrated that the hybrid objective function, which includes dose-based objectives for the target and gEUD-based objectives for normal tissue, results in sparing of the critical structures equally good as that of the pure gEUD objective function, with lower target dose maxima.
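The gEUD appearing in these objective functions is the generalized (power) mean of the dose distribution. A minimal sketch of the standard formula follows; the example dose/volume values and the choice of the parameter a are illustrative, not taken from the paper's plans.

```python
def geud(doses, volumes, a):
    """Generalized equivalent uniform dose of a discrete dose distribution:
        gEUD = (sum_i v_i * d_i**a) ** (1/a),
    where v_i are fractional volumes summing to 1.
    a = 1 gives the mean dose; large positive a approaches the maximum dose
    (serial organs at risk); large negative a approaches the minimum dose
    (targets; doses must then be strictly positive)."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "fractional volumes must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
```

In a hybrid objective of the kind the paper favors, a term like this would be evaluated per organ at risk and combined with dose-based target terms.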
MA Li-ming; JIANG Hong; WANG Xiao-chun
2004-01-01
The algorithm is divided into two steps. The first step pre-locates the blank by aligning its centre of gravity and approximate normal vector with those of the destination surfaces, with the largest overlap of the projections of the two objects on a plane perpendicular to the normal vector. The second step optimizes an objective function by means of a gradient-simulated annealing algorithm to get the best match between a set of distributed points on the blank and the destination surfaces. An example of machining hydroelectric turbine blades is given to verify the effectiveness of the algorithm.
Óscar Begambre
2010-01-01
In this research, the Simulated Annealing (SA) algorithm is employed to solve the inverse problem of damage detection in beam-type structures using modal data polluted with noise. The formulation of the objective function for the SA optimization procedure is based on the modified residual force method. The SA used in this research performed better than a Genetic Algorithm (GA) on two difficult benchmark functions reported in the international literature. The proposed structural damage-identification scheme is confirmed and validated numerically using Euler-Bernoulli beam theory and Finite Element Models (FEM) of cantilever and free-free beams.
Gomes, Mario Helder [Departamento de Engenharia Electrotecnica, Instituto Politecnico de Tomar, Quinta do Contador, Estrada da Serra, 2300 Tomar (Portugal); Saraiva, Joao Tome [INESC Porto, Faculdade de Engenharia, Universidade do Porto, Campus da FEUP, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)
2009-06-15
This paper describes an optimization model to be used by System Operators in order to validate the economic schedules obtained by Market Operators together with the injections from Bilateral Contracts. These studies are performed off-line in the day before operation; the developed model is based on adjustment bids submitted by generators and loads, and it is used by System Operators when necessary to enforce technical or security constraints. This model corresponds to an enhancement of an approach described in a previous paper, and it now includes discrete components such as transformer taps and reactor and capacitor banks. The resulting mixed-integer formulation is solved using Simulated Annealing, a well-known metaheuristic especially suited for combinatorial problems. Once the Simulated Annealing converges and the values of the discrete variables are fixed, the resulting non-linear continuous problem is solved using Sequential Linear Programming to get the final solution. The developed model corresponds to an AC version; it includes constraints related to the capability diagram of synchronous generators and variables allowing the computation of the active power required to balance active losses. Finally, the paper includes a Case Study based on the IEEE 118 bus system to illustrate the results that can be obtained and their interest. (author)
Using the adaptive blockset for simulation and rapid prototyping
Ravn, Ole
1999-01-01
The paper presents the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The basics of indirect adaptive controllers are summarized. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller, and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
Nakos, J.T.; Rosinski, S.T.; Acton, R.U.
1994-11-01
The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at temperature heat-up/cooldown rates of 7, 14, and 28 °C/hr [12.5, 25, and 50 °F/hr] as measured on the heated face. A peak temperature of 454 °C [850 °F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be one-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.
Neuromuscular adaptation to actual and simulated weightlessness
Edgerton, V. R.; Roy, R. R.
1994-01-01
The chronic "unloading" of the neuromuscular system during spaceflight has detrimental functional and morphological effects. Changes in the metabolic and mechanical properties of the musculature can be attributed largely to the loss of muscle protein and the alteration in the relative proportion of the proteins in skeletal muscle, particularly in the muscles that have an antigravity function under normal loading conditions. These adaptations could result in decrements in the performance of routine or specialized motor tasks, both of which may be critical for survival in an altered gravitational field, i.e., during spaceflight and during return to 1 G. For example, the loss in extensor muscle mass requires a higher percentage of recruitment of the motor pools for any specific motor task. Thus, a faster rate of fatigue will occur in the activated muscles. These consequences emphasize the importance of developing techniques for minimizing muscle loss during spaceflight, at least in preparation for the return to 1 G after spaceflight. New insights into the complexity and the interactive elements that contribute to the neuromuscular adaptations to space have been gained from studies of the role of exercise and/or growth factors as countermeasures of atrophy. The present chapter illustrates the inevitable interactive effects of neural and muscular systems in adapting to space. It also describes the considerable progress that has been made toward the goal of minimizing the functional impact of the stimuli that induce the neuromuscular adaptations to space.
Simulation of Adaptive Kinetic Architectural Structures
Kirkegaard, Poul Henning
2010-01-01
This project deals with shape control of kinetic structures within the field of adaptable architecture. Here a variable geometry truss cantilever structure is analyzed using MATLAB/SIMULINK and the multibody dynamic software MSC Adams. Active shape control of a structure requires that the kinematic...
Spin-free quantum computational simulations and symmetry adapted states
Whitfield, James Daniel
2013-01-01
The ideas of digital simulation of quantum systems using a quantum computer parallel the original ideas of numerical simulation using a classical computer. In order for quantum computational simulations to advance to a competitive point, many techniques from classical simulations must be imported into the quantum domain. In this article, we consider the applications of symmetry in the context of quantum simulation. Building upon well established machinery, we propose a form of first quantized simulation that only requires the spatial part of the wave function, thereby allowing spin-free quantum computational simulations. We go further and discuss the preparation of N-body states with specified symmetries based on projection techniques. We consider two simple examples, molecular hydrogen and cyclopropenyl cation, to illustrate the ideas. While the methods here represent adaptations of known quantum algorithms, they are the first to explicitly deal with preparing N-body symmetry-adapted states.
Chaotic Simulated Annealing by A Neural Network Model with Transient Chaos
Chen, L; Chen, Luonan; Aihara, Kazuyuki
1997-01-01
We propose a neural network model with transient chaos, or a transiently chaotic neural network (TCNN), as an approximation method for combinatorial optimization problems, by introducing transiently chaotic dynamics into neural networks. Unlike conventional neural networks with point attractors only, the proposed neural network has richer and more flexible dynamics, so it can be expected to have a higher ability to search for globally optimal or near-optimal solutions. A significant property of this model is that the chaotic neurodynamics are temporarily generated for searching and self-organizing, and eventually vanish with the autonomous decrease of a bifurcation parameter corresponding to the "temperature" of the usual annealing process. Therefore, the neural network gradually approaches, through the transient chaos, a dynamical structure similar to such conventional models as the Hopfield neural network, which converges to a stable equilibrium point. Since the optimization process of the transiently chaoti...
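A single transiently chaotic neuron of the Chen-Aihara type is enough to show the mechanism: a self-feedback strength z(t) injects chaotic wandering and decays like an annealing temperature, after which the dynamics converge as in a Hopfield network. The parameter values below are illustrative assumptions chosen to make the transient visible; they are not taken from the paper.

```python
import math

def tcnn_single_neuron(steps=2000, k=0.9, alpha=0.015, I=0.65,
                       z0=0.08, beta=0.003, I0=0.65, eps=0.004):
    """One transiently chaotic neuron (Chen-Aihara-type dynamics):
        x(t)   = sigmoid(y(t) / eps)
        y(t+1) = k*y(t) + alpha*I - z(t)*(x(t) - I0)
        z(t+1) = (1 - beta)*z(t)
    While z is large the map is expansive and x wanders chaotically;
    as z decays the dynamics become contracting and x settles."""
    y, z = 0.0, z0
    xs = []
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-y / eps))
        y = k * y + alpha * I - z * (x - I0)  # decaying chaotic self-feedback
        z = (1.0 - beta) * z                  # "annealing" of the bifurcation parameter
        xs.append(x)
    return xs
```

Plotting the returned sequence reproduces the qualitative picture described in the abstract: a noisy transient followed by convergence to a fixed point, with no externally injected stochastic noise.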
Doddy Kastanya
2017-02-01
In any reactor physics analysis, the instantaneous power distribution in the core can be calculated when the actual bundle-wise burnup distribution is known. Considering the fact that CANDU (Canada Deuterium Uranium) reactors utilize on-power refueling to compensate for the reduction of reactivity due to fuel burnup, in CANDU fuel management analysis snapshots of power and burnup distributions can be obtained by simulating and tracking the reactor operation over an extended period using various tools such as the *SIMULATE module of the Reactor Fueling Simulation Program (RFSP) code. However, for some studies, such as an evaluation of a conceptual design of a next-generation CANDU reactor, the preferred approach to obtain a snapshot of the power distribution in the core is based on the patterned-channel-age model implemented in the *INSTANTAN module of the RFSP code. The objective of this approach is to obtain a representative snapshot of core conditions quickly. At present, such patterns can be generated by using a program called RANDIS, which is implemented within the *INSTANTAN module. In this work, we present an alternative approach to derive the patterned-channel-age model, where a simulated-annealing-based algorithm is used to find such patterns, which produce reasonable power distributions.
Fabbri, Paolo; Trevisani, Sebastiano [Dipartimento di Geologia, Paleontologia e Geofisica, Universita degli Studi di Padova, via Giotto 1, 35127 Padova (Italy)
2005-10-01
The spatial distribution of groundwater temperatures in the low-temperature (60-86 °C) geothermal Euganean field of northeastern Italy has been studied using a geostatistical approach. The data set consists of 186 temperatures measured in a fractured limestone reservoir, over an area of 8 km². Investigation of the spatial continuity by means of variographic analysis revealed the presence of anisotropies that are apparently related to the particular geologic structure of the area. After inference of variogram models, a simulated annealing procedure was used to perform conditional simulations of temperature in the domain being studied. These simulations honor the data values and reproduce the spatial continuity inferred from the data. Post-processing of the simulations permits an assessment of temperature uncertainties. Maps of estimated temperatures, interquartile range, and of the probability of exceeding a prescribed 80 °C threshold were also computed. The methodology described could prove useful when siting new wells in a geothermal area. (author)
ArF-excimer-laser annealing of 3C-SiC films—diode characteristics and numerical simulation
Mizunami, T.; Toyama, N.
2003-09-01
We fabricated Schottky barrier diodes using 3C-SiC films deposited on Si(111) by lamp-assisted thermal chemical vapor deposition and annealed with an ArF excimer laser. Improvement in both the reverse current and the ideality factor was obtained with 1-3 pulses at energy densities of 1.4-1.6 J/cm² per pulse. We solved a heat equation numerically, assuming a transient liquid phase of SiC. The calculated threshold energy density for melting the surface was 0.9 J/cm². The thermal effects of the Si substrate on the SiC film were also discussed. The experimental optimum condition was consistent with the numerical simulation.
Satyajit Guha; Soumya Ganguly Neogi; Pinaki Chaudhury
2014-05-01
In this paper, we explore the use of a stochastic optimizer, namely simulated annealing (SA), followed by a density functional theory (DFT)-based strategy for evaluating the structure and infrared spectroscopy of (H2O)n·OH− clusters, where n = 1-6. We have shown that the use of SA can generate both global and local structures of these cluster systems. We also perform a DFT calculation, using the optimized coordinates obtained from SA as input, and extract the IR spectra of these systems. Finally, we compare our results with available theoretical and experimental data. There is a close correspondence between the computed frequencies from our theoretical study and the available experimental data. To further aid in understanding the details of the hydrogen bonds formed, we performed atoms-in-molecules calculations on all the global minimum structures to evaluate the relevant electron densities and critical points.
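The SA stage of such a workflow can be illustrated with a toy stand-in: annealing the geometry of a 4-atom Lennard-Jones cluster instead of a water cluster with a quantum-chemical energy function. Everything here (the LJ potential in reduced units, the move size, the cooling schedule) is an illustrative assumption, not the authors' setup; the global minimum of the 4-atom LJ cluster is a regular tetrahedron with energy -6.0 in reduced units.

```python
import math
import random

def lj_energy(coords):
    """Total Lennard-Jones energy in reduced units (epsilon = sigma = 1)."""
    e, n = 0.0, len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def anneal_cluster(n_atoms=4, steps=30000, t0=0.4, alpha=0.9997,
                   step_size=0.1, seed=3):
    """SA over atomic coordinates: random single-atom displacements,
    Metropolis acceptance, geometric cooling."""
    rng = random.Random(seed)
    coords = [[rng.uniform(-0.7, 0.7) for _ in range(3)] for _ in range(n_atoms)]
    energy, t = lj_energy(coords), t0
    best, best_e = [c[:] for c in coords], energy
    for _ in range(steps):
        i = rng.randrange(n_atoms)
        old = coords[i][:]
        coords[i] = [c + rng.gauss(0.0, step_size) for c in old]
        new_e = lj_energy(coords)
        if new_e <= energy or rng.random() < math.exp((energy - new_e) / t):
            energy = new_e
            if energy < best_e:
                best, best_e = [c[:] for c in coords], energy
        else:
            coords[i] = old  # undo rejected displacement
        t *= alpha
    return best, best_e
```

In the paper's scheme, low-energy SA candidates like `best` would then be re-optimized at the DFT level before the IR spectra are computed.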
Shangchia Liu
2015-01-01
In the field of distributed decision making, different agents share a common processing resource, and each agent wants to minimize a cost function depending on its jobs only. These issues arise in different application contexts, including real-time systems, integrated service networks, industrial districts, and telecommunication systems. Motivated by its importance in practical applications, we consider two-agent scheduling on a single machine, where the objective is to minimize the total completion time of the jobs of the first agent with the restriction that an upper bound is imposed on the total completion time of the jobs of the second agent. For solving the proposed problem, a branch-and-bound algorithm and three simulated annealing algorithms are developed to find the optimal solution. In addition, extensive computational experiments are conducted to test the performance of the algorithms.
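One common way to handle the second agent's upper bound inside a simulated annealing search is a penalty term; the sketch below is a generic single-machine version with a swap neighborhood. The penalty weight, cooling schedule, and move set are illustrative assumptions, not the paper's branch-and-bound or its three SA variants.

```python
import math
import random

def agent_costs(sequence, ptimes, agents):
    """Total completion times (C-sums) of agent 1 and agent 2 for a sequence."""
    t, c1, c2 = 0, 0, 0
    for j in sequence:
        t += ptimes[j]
        if agents[j] == 1:
            c1 += t
        else:
            c2 += t
    return c1, c2

def anneal_schedule(ptimes, agents, q_bound, steps=20000, t0=50.0,
                    alpha=0.9995, penalty=1000.0, seed=7):
    """SA for two-agent single-machine scheduling: minimize agent 1's total
    completion time subject to agent 2's total completion time <= q_bound
    (constraint violations handled by a penalty term)."""
    rng = random.Random(seed)
    n = len(ptimes)

    def cost(seq):
        c1, c2 = agent_costs(seq, ptimes, agents)
        return c1 + penalty * max(0, c2 - q_bound)

    seq = list(range(n))
    rng.shuffle(seq)
    c = cost(seq)
    best, best_c = seq[:], c
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        seq[i], seq[j] = seq[j], seq[i]
        nc = cost(seq)
        if nc <= c or rng.random() < math.exp((c - nc) / t):
            c = nc
            if c < best_c:
                best, best_c = seq[:], c
        else:
            seq[i], seq[j] = seq[j], seq[i]  # undo rejected swap
        t *= alpha
    return best, best_c
```

When the bound is loose, the optimum simply sequences agent 1's jobs first in shortest-processing-time order; tightening `q_bound` forces agent 2's jobs forward in the sequence.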
Anonymous
2006-01-01
The characteristics of the design resources in ship collaborative design are described, and a hierarchical model for the evaluation of the design resources is established. A comprehensive evaluation of the co-designers of the collaborative design resources is carried out from different aspects using the Analytic Hierarchy Process (AHP), and according to the evaluation results, the candidates are determined. Meanwhile, based on the principle of minimum cost, and starting from the relations between the design tasks and the corresponding co-designers, an optimizing selection model of the collaborators is established, and a novel genetic algorithm combined with simulated annealing is proposed to realize the optimization. It overcomes the defects of the genetic algorithm, which may lead to premature convergence and local optimization if used individually. The application of this method in a ship collaborative design system proves its feasibility and provides a quantitative method for the optimizing selection of design resources.
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
Rincon, Luis [Universidad de Los Andes, Merida (Venezuela)
2001-03-01
A semiempirical simulated annealing molecular dynamics method using a fictitious Lagrangian has been developed for the study of structural and electronic properties of micro- and nano-clusters. As an application of the present scheme, we study the structure of Na_n clusters in the range n = 2-100, and compare the present calculations with some ab initio model calculations.
Ravn, Ole
1998-01-01
The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller, and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
Tournus, Florent; Tamion, Alexandre; Hillion, Arnaud; Dupuis, Véronique
2016-12-01
Isothermal remanent magnetization (IRM) and direct-current demagnetization (DcD) measurements are powerful tools for qualitatively studying the interactions (through the Δm parameter) between magnetic particles in a granular medium. For magnetic nanoparticles diluted in a matrix, it is possible to reach a regime where Δm is equal to zero, i.e., where interparticle interactions are negligible: one can then infer the intrinsic properties of the nanoparticles through measurements on an assembly, analyzed by a combined fit procedure (based on the Stoner-Wohlfarth and Néel models). Here we illustrate the benefits of a quantitative analysis of IRM curves for Co nanoparticles embedded in amorphous carbon (before and after annealing): while a large anisotropy increase might have been deduced from the other measurements, IRM curves provide an improved characterization of the nanomagnets' intrinsic properties, revealing that this is in fact not the case. This shows that IRM curves, which only probe the irreversible switching of nanomagnets, are complementary to the widely used low-field susceptibility curves.
Adaptive LES Methodology for Turbulent Flow Simulations
Oleg V. Vasilyev
2008-06-12
Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003), and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of the DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic...
温平川; 徐晓东; 何先刚
2003-01-01
This paper presents a highly hybrid Genetic Algorithm / Simulated Annealing algorithm. The algorithm has been successfully implemented on a Beowulf PC cluster and applied to a set of standard function optimization problems. The experimental results show that the proposed algorithm is not only effective but also robust.
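The hybrid GA/SA idea can be sketched as follows: a plain genetic algorithm whose mutations pass through an SA-style Metropolis acceptance test at a cooling temperature. All operators, constants, and the sphere objective are illustrative assumptions; the paper's exact operators and parallel Beowulf implementation are not reproduced here.

```python
import math
import random

def hybrid_ga_sa(f, dim=2, pop_size=20, gens=100, t0=1.0, alpha=0.95, seed=2):
    """Toy GA/SA hybrid: elitist selection and one-point crossover, with an
    SA-style acceptance test applied to each mutated child."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    t = t0
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(dim)            # one-point crossover
            child = a[:cut] + b[cut:]
            mutant = [g + rng.gauss(0, 0.3) for g in child]
            # SA acceptance: keep the mutation if it improves the child,
            # or probabilistically at the current temperature.
            if f(mutant) <= f(child) or rng.random() < math.exp((f(child) - f(mutant)) / t):
                child = mutant
            children.append(child)
        pop = parents + children
        t *= alpha                              # cool between generations
    return min(pop, key=f)

best = hybrid_ga_sa(lambda v: sum(x * x for x in v))
```

The temperature couples the two metaheuristics: early generations tolerate worsening mutations (exploration), late generations reject them (refinement).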
Adaptive Mesh Fluid Simulations on GPU
Wang, Peng; Kaehler, Ralf
2009-01-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally onto this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA par...
PASSATA - Object oriented numerical simulation software for adaptive optics
Agapito, G; Esposito, S
2016-01-01
We present the latest version of the PyrAmid Simulator Software for Adaptive opTics Arcetri (PASSATA), an IDL- and CUDA-based object oriented software package developed in the Adaptive Optics group of the Arcetri Observatory for Monte Carlo end-to-end adaptive optics simulations. The original aim of this software was to evaluate the performance of a single conjugate adaptive optics system for a ground-based telescope with a pyramid wavefront sensor. After some years of development, the current version of PASSATA is able to simulate several adaptive optics systems: single conjugate, multi conjugate and ground layer, with Shack-Hartmann and pyramid wavefront sensors. It can simulate 8 m to 40 m class telescopes, with diffraction-limited and resolved sources at finite or infinite distance from the pupil. The main advantages of this software are the versatility given by the object oriented approach and the speed given by the CUDA implementation of the most computationally demanding routines. We describe the software with its...
PASSATA: object oriented numerical simulation software for adaptive optics
Agapito, G.; Puglisi, A.; Esposito, S.
2016-07-01
We present the latest version of the PyrAmid Simulator Software for Adaptive opTics Arcetri (PASSATA), an IDL- and CUDA-based object oriented software package developed in the Adaptive Optics group of the Arcetri Observatory for Monte Carlo end-to-end adaptive optics simulations. The original aim of this software was to evaluate the performance of a single conjugate adaptive optics system for a ground-based telescope with a pyramid wavefront sensor. After some years of development, the current version of PASSATA is able to simulate several adaptive optics systems: single conjugate, multi conjugate and ground layer, with Shack-Hartmann and pyramid wavefront sensors. It can simulate 8 m to 40 m class telescopes, with diffraction-limited and resolved sources at finite or infinite distance from the pupil. The main advantages of this software are the versatility given by the object oriented approach and the speed given by the CUDA implementation of the most computationally demanding routines. We describe the software with its latest developments and present some examples of application.
Liu, Bin, E-mail: bins@ieee.org [School of Computer Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023 (China)
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then carefully tailor its parameters to make it resemble the posterior as closely as possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation: the bigger the ESS, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixture components in the proposal. Brute-force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor this number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
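The ESS-guided tailoring loop can be illustrated in one dimension. The sketch below widens a Gaussian proposal until the ESS of the importance weights indicates an adequate match to the target, then returns the IS estimate of the integral. It is a toy stand-in for the paper's mixture delete/merge/add scheme, not the authors' algorithm; the target, proposal family, and thresholds are assumptions.

```python
import math
import random

def ess(weights):
    """Effective sample size of a set of importance weights."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def adaptive_is(log_target, n=5000, seed=3):
    """Estimate the integral of exp(log_target) by importance sampling,
    widening a zero-mean Gaussian proposal until ESS/n exceeds 0.5."""
    rng = random.Random(seed)
    scale = 0.5
    while True:
        xs = [rng.gauss(0.0, scale) for _ in range(n)]
        def logq(x):  # log density of the current N(0, scale^2) proposal
            return -0.5 * (x / scale) ** 2 - math.log(scale * math.sqrt(2 * math.pi))
        ws = [math.exp(log_target(x) - logq(x)) for x in xs]
        if ess(ws) / n > 0.5 or scale > 8.0:
            return sum(ws) / n, ess(ws) / n
        scale *= 1.5  # proposal too narrow for the target; widen it

# Unnormalized standard normal: the integral should be sqrt(2*pi) ~ 2.5066.
z, eff = adaptive_is(lambda x: -0.5 * x * x)
```

A low ESS signals weight degeneracy (proposal too narrow), which is exactly the diagnostic the paper uses to drive proposal adaptation.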
Fuzzy Adaptive Hybrid Annealed Particle Filter Algorithm
蒋东明
2013-01-01
A new particle filter algorithm is proposed, based on the hybrid annealed particle filter (HAPF), for on-line estimation of non-Gaussian nonlinear systems and to address the inherent degeneracy problem of the particle filter. In the filtering algorithm, an adjustment factor is introduced according to the relation between the statistical properties of the state noise and the measurement noise of the system, and an annealing coefficient is then produced by a fuzzy inference system. State parameter separation and the annealing coefficient are used to construct the importance probability density function. The algorithm obtains a better annealing coefficient while keeping the advantages of HAPF. Simulation experiments show that the proposed filtering algorithm outperforms HAPF.
Adaptive time steps in trajectory surface hopping simulations
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
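The adaptive-step idea above (refine the step only inside critical regions, restore the default step elsewhere) can be sketched with a generic explicit integrator. The detector, step factors, floor, and test ODE below are illustrative assumptions, not the TSH/MRCI protocol itself.

```python
def integrate_adaptive(f, y0, t_end, dt0, is_critical, min_dt=0.01):
    """Explicit Euler with adaptive time steps: halve the step while the
    state is flagged critical, double it back (up to dt0) otherwise."""
    t, y, dt = 0.0, y0, dt0
    steps = 0
    while t < t_end:
        if is_critical(t, y):
            dt = max(dt / 2.0, min_dt)   # refine only where needed
        else:
            dt = min(dt * 2.0, dt0)      # recover the default step
        dt = min(dt, t_end - t)          # do not overshoot the end time
        y += dt * f(t, y)
        t += dt
        steps += 1
    return y, steps

# Example: dy/dt = -y; treat |y| > 0.5 as the "critical" region.
y, n = integrate_adaptive(lambda t, y: -y, 1.0, 5.0, 0.1,
                          lambda t, y: abs(y) > 0.5)
```

The extra cost is confined to the critical region, mirroring the paper's point that the computational effort increases only where the electronic structure is problematic.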
Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates
Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.
2016-03-01
We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high-performance power semiconductor modules using finite-element simulations and transient plane source measurements. For the simulations, several physics modules can be used, which are suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer, we found non-isothermal simulations to be closest to reality for the temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and, based on these results, evaluated the thermal resistance of a typical power module using calculation-time-optimized steady-state simulations. Here we investigated the influence of the thermal contact conductance (TCC) between the metal matrix and the inserts on heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations, we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplates we achieved local improvements in heat dissipation compared to the plain metal baseplate.
Zhang, Jiapu
2013-01-01
Simulated annealing (SA) was inspired by annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects, both attributes of the material that depend on its thermodynamic free energy. In this paper, we first study the practical implementation of SA in detail. Then, hybridizing pure SA with local (or global) search optimization methods allows us to design several effective and efficient global optimization methods. In order to keep the original sense of SA, we clarify our understanding of SA in the crystallography and molecular modeling fields through studies of prion amyloid fibrils.
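A minimal sketch of the practical SA loop discussed above: Metropolis acceptance with a geometric cooling schedule. The objective, neighborhood move, and schedule constants are illustrative assumptions, not the paper's specific setup.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, alpha=0.95, iters=2000, seed=1):
    """Minimize f by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        # Propose a neighboring solution by a small random perturbation.
        y = x + rng.uniform(-step, step)
        fy = f(y)
        # Metropolis criterion: always accept improvements; accept uphill
        # moves with probability exp(-delta/T), which shrinks as T cools.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # controlled cooling, the analogue of slow annealing
    return best, fbest

x, fx = simulated_annealing(lambda v: (v - 3.0) ** 2, x0=-10.0)
```

High temperature early on lets the walker escape local minima; the slow temperature decrease is the direct analogue of the controlled cooling described above.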
Bello, A.; Laredo, E.; Grimau, M.
1999-11-01
The existence of a distribution of relaxation times has been widely used to describe the relaxation function versus frequency in glass-forming liquids. Several empirical distributions have been proposed, and the usual method is to fit the experimental data to a model that assumes one of these functions. An alternative is to extract from the experimental data the discrete profile of the distribution function that best fits the experimental curve, without any a priori assumption. To test this approach, a Monte Carlo algorithm using simulated annealing is used to best fit simulated dielectric loss data, ɛ''(ω), generated with Cole-Cole, Cole-Davidson, Havriliak-Negami, and Kohlrausch-Williams-Watts (KWW) functions. The relaxation time distribution, G(ln(τ)), is obtained as a histogram that follows very closely the analytical expression for the distributions that are known in these cases. Also, the temporal decay functions, φ(t), are evaluated and compared to a stretched exponential. The method is then applied to experimental data for α-poly(vinylidene fluoride) (PVDF) over a temperature range starting at 233 K; a value of 87 is found, which characterizes this polymer as a relatively structurally strong material.
Using Adaptive Simulated Annealing to Estimate Ocean Bottom Acoustic Properties from Acoustic Data
2007-11-02
[Garbled figure residue: convergence of adaptive simulated annealing estimates of sediment sound speed (m/sec), density (gm/cc), and attenuation (dB/λ) versus iteration; numerical values not recoverable.]
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both...
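The surrogate-guided flavor of limit-surface sampling can be sketched as follows: candidate points are scored by their distance to the nearest known failure plus the nearest known success, so the simulation budget concentrates near the pass/fail boundary. The toy "simulation" (failure outside the unit circle) and the scoring rule are assumptions for illustration, not the authors' Morse-Smale-based methods.

```python
import random

def run_sim(x, y):
    """Stand-in for an expensive simulation: failure outside the unit circle."""
    return (x * x + y * y) > 1.0

def adaptive_limit_surface(n_init=20, n_adapt=80, seed=4):
    """Among random candidates, run the simulation at the point that sits
    closest to both a known failure and a known success."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0), (2.0, 2.0)]  # seed one success and one failure
    pts += [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n_init)]
    labels = {p: run_sim(*p) for p in pts}
    for _ in range(n_adapt):
        cands = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(50)]
        def ambiguity(c):
            d = lambda p: (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2
            fails = [p for p in labels if labels[p]]
            passes = [p for p in labels if not labels[p]]
            # Small only when both a failure and a success are nearby,
            # i.e. the candidate lies near the limit surface.
            return min(map(d, fails)) + min(map(d, passes))
        best = min(cands, key=ambiguity)
        labels[best] = run_sim(*best)
    return list(labels)[n_init + 2:]  # the adaptively chosen points

pts = adaptive_limit_surface()
```

Each new label refines the implicit surrogate, so later samples cluster ever more tightly around the true boundary at radius 1.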
The behavior of adaptive bone-remodeling simulation models
H.H. Weinans (Harrie); R. Huiskes (Rik); H.J. Grootenboer
1992-01-01
The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule...
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
The relative entropy is fundamental to adaptive resolution simulations
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems, simultaneously employing atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
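For background, the relative entropy minimized in systematic coarse-graining is conventionally written as follows (the standard definition from the coarse-graining literature, stated here as context rather than quoted from this paper), where i runs over atomistic configurations, M is the atomistic-to-CG mapping, and S_map accounts for the degeneracy of that mapping:

```latex
S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
    \ln\!\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}
    \;+\; \bigl\langle S_{\mathrm{map}} \bigr\rangle_{\mathrm{AA}}
```

S_rel is non-negative and vanishes only when the CG model reproduces the mapped atomistic distribution exactly, which is why CG potentials minimizing it give the smoothest atomistic/CG coupling in the adaptive setup.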
Adaptive resolution simulation of an atomistic protein in MARTINI water
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej
2014-01-01
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coa...
Frühwirth, R; Vanlaer, Pascal
2007-01-01
Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described, using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.
Adaptive deployment of model reductions for tau-leaping simulation.
Wu, Sheng; Fu, Jin; Petzold, Linda R
2015-05-28
Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.
Adaptive deployment of model reductions for tau-leaping simulation
Wu, Sheng; Fu, Jin; Petzold, Linda R.
2015-05-01
Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.
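The basic tau-leaping step that these model reductions accelerate can be sketched as follows. A reversible isomerization A ⇌ B is used as a stand-in system, and the rate constants are arbitrary; the automatic reduction machinery itself is not shown.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for a Poisson sample (adequate for moderate lam)."""
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def tau_leap(x, props, stoich, tau, rng):
    """One tau-leaping step: each reaction channel fires a Poisson(a_j * tau)
    number of times; the state jumps by the summed stoichiometry."""
    ks = [poisson(rng, a(x) * tau) for a in props]
    return [xi + sum(k * s[i] for k, s in zip(ks, stoich))
            for i, xi in enumerate(x)]

# Reversible isomerization A <-> B with c1 = 1.0, c2 = 0.5 (illustrative).
props = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
stoich = [(-1, +1), (+1, -1)]
rng = random.Random(5)
state = [100, 0]
for _ in range(200):            # integrate to t = 200 * 0.05 = 10
    state = tau_leap(state, props, stoich, 0.05, rng)
```

Because whole batches of reaction firings are leaped over at once, a stiff channel with a large propensity forces tiny tau values, which is exactly the inefficiency the model reductions above address.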
Simulation for noise cancellation using LMS adaptive filter
Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung
2017-06-01
In this paper, the fundamental algorithm of noise cancellation, the Least Mean Square (LMS) algorithm, is studied and implemented in an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. The noise-corrupted speech signal and the engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully canceled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully canceled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation at the lower frequency range.
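The LMS noise-cancellation structure described above can be sketched directly: the adaptive filter learns to predict the noise component of the primary input from a reference noise input, and the error output is the cleaned signal. The synthetic signals, filter length, and step size mu below are assumptions standing in for the paper's speech and engine-noise recordings.

```python
import math
import random

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: y estimates the noise in `primary`
    from `reference`; the error e = d - y is the recovered signal."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                          # reference delay line
        y = sum(wi * xi for wi, xi in zip(w, buf))    # noise estimate
        e = d - y                                     # cleaned output sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]  # LMS update
        out.append(e)
    return out

# Synthetic test: a sinusoid corrupted by a filtered copy of white noise.
rng = random.Random(6)
n = 4000
clean = [math.sin(2 * math.pi * 0.01 * i) for i in range(n)]
noise = [rng.uniform(-1, 1) for _ in range(n)]
# Noise reaching the primary input passes through a short unknown channel.
corrupting = [0.7 * noise[i] + 0.3 * (noise[i - 1] if i else 0.0) for i in range(n)]
primary = [c + v for c, v in zip(clean, corrupting)]
cleaned = lms_cancel(primary, noise)
```

Because the reference noise is uncorrelated with the clean signal, the filter converges to the unknown channel and only the noise component is subtracted.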
Felipe Baesler
2008-12-01
This paper introduces a variant of the metaheuristic simulated annealing, oriented to solving multiobjective optimization problems. The technique is called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). It incorporates short- and long-term memory concepts into simulated annealing in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study from the local sawmill industry. The results show that MOSARTS behaved much better than the other methods, finding better solutions in terms of dominance and frontier dispersion.
Silvia Gaona
2015-01-01
Censuses in Mexico are taken by the National Institute of Statistics and Geography (INEGI). In this paper a Two-Phase Approach (TPA) to optimize the routes of INEGI's census takers is presented. For each pollster, in the first phase, a route is produced by means of the Simulated Annealing (SA) heuristic, which attempts to minimize the travel distance subject to particular constraints. Whenever the route is unrealizable, it is made realizable in the second phase by constructing a visibility graph for each obstacle and applying Dijkstra's algorithm to determine the shortest path in this graph. A tuning methodology based on the irace package was used to determine the parameter values for TPA on a subset of 150 instances provided by INEGI. The practical effectiveness of TPA was assessed on another subset of 1962 instances, comparing its performance with that of the in-use heuristic (INEGIH). The results show that TPA clearly outperforms INEGIH, with an average improvement of 47.11%.
KAMAL DEEP; PARDEEP K SINGH
2016-09-01
In this paper, an integrated mathematical model of multi-period cell formation and part operation tradeoff in a dynamic cellular manufacturing system is proposed, in consideration of multiple part process routes. The paper emphasizes production flexibility (production/subcontracting of part operations) to satisfy product demand requirements in different period segments of the planning horizon, considering production capacity shortage and/or sudden machine breakdown. The proposed model simultaneously generates machine cells and part families and selects the optimum process route, instead of the user specifying predetermined routes. Conventional optimization methods for the optimal cell formation problem require a substantial amount of time and memory space. Hence a simulated annealing based genetic algorithm is proposed to explore the solution regions efficiently and to expedite the search. To evaluate the computability of the proposed algorithm, different problem scenarios are adopted from the literature. The results confirm the effectiveness of the proposed approach in designing the manufacturing cells and minimizing the overall cost, considering various manufacturing aspects such as production volume, multiple process routes, production capacity, machine duplication, system reconfiguration, material handling and subcontracting of part operations.
De-xuan ZOU; Gai-ge WANG; Gai PAN; Hong-wei QI
2016-01-01
Outline-free floorplanning focuses on area and wirelength reductions, which are usually meaningless, since they can hardly satisfy modern design requirements. We concentrate on a more difficult and useful issue, fixed-outline floorplanning. This issue imposes fixed-outline constraints on the outline-free floorplanning, making the physical design more interesting and challenging. The contributions of this paper are primarily twofold. First, a modified simulated annealing (MSA) algorithm is proposed. In the beginning of the evolutionary process, a new attenuation formula is used to decrease the temperature slowly, to enhance MSA's global searching capacity. After a period of time, the traditional attenuation formula is employed to decrease the temperature rapidly, to maintain MSA's local searching capacity. Second, an excessive area model is designed to guide MSA to find feasible solutions readily. This can save much time for refining feasible solutions. Additionally, B*-tree representation is known as a very useful method for characterizing floorplanning; therefore, it is employed to perform the perturbing operations for MSA. Finally, six groups of benchmark instances with different dead spaces and aspect ratios (circuits n10, n30, n50, n100, n200, and n300) are chosen to demonstrate the efficiency of the proposed method on fixed-outline floorplanning. Compared to several existing methods, the proposed method is more efficient in obtaining desirable objective function values associated with the chip area, wirelength, and fixed-outline constraints.
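The two-phase cooling idea can be sketched as a temperature schedule: slow geometric decay early (global search), then the usual faster decay (local refinement). The constants and the switch point below are assumptions for illustration; the paper's actual attenuation formulas are not reproduced here.

```python
def msa_temperature(k, t0=100.0, k_switch=200, slow=0.999, fast=0.95):
    """Two-phase cooling in the spirit of the MSA described above:
    decay slowly up to iteration k_switch, then rapidly afterwards."""
    if k <= k_switch:
        return t0 * slow ** k              # slow phase: global exploration
    t_switch = t0 * slow ** k_switch       # temperature at the switch point
    return t_switch * fast ** (k - k_switch)  # fast phase: local refinement

temps = [msa_temperature(k) for k in range(400)]
```

Because the fast phase starts from the temperature reached by the slow phase, the schedule is continuous and strictly decreasing across the switch.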
M. Madić
2013-09-01
This paper presents a systematic methodology for empirical modeling and optimization of surface roughness in nitrogen-assisted CO2 laser cutting of stainless steel. The surface roughness prediction model was developed in terms of laser power, cutting speed, assist gas pressure and focus position by using an artificial neural network (ANN). To cover a wider range of laser cutting parameters and obtain an experimental database for the ANN model development, Taguchi's L27 orthogonal array was implemented in the experimental plan. The developed ANN model was expressed as an explicit nonlinear function, while the influence of laser cutting parameters and their interactions on surface roughness was analyzed by generating 2D and 3D plots. The final goal of the experimental study focuses on the determination of the optimum laser cutting parameters for the minimization of surface roughness. Since the solution space of the developed ANN model is complex, and the possibility of many local solutions is great, simulated annealing (SA) was selected as the method for the optimization of surface roughness.
Hu, Kan-Nian; Qiang, Wei; Tycko, Robert
2011-01-01
We describe a general computational approach to site-specific resonance assignments in multidimensional NMR studies of uniformly 15N,13C-labeled biopolymers, based on a simple Monte Carlo/simulated annealing (MCSA) algorithm contained in the program MCASSIGN2. Input to MCASSIGN2 includes lists of multidimensional signals in the NMR spectra with their possible residue-type assignments (which need not be unique), the biopolymer sequence, and a table that describes the connections that relate one signal list to another. As output, MCASSIGN2 produces a high-scoring sequential assignment of the multidimensional signals, using a score function that rewards good connections (i.e., agreement between relevant sets of chemical shifts in different signal lists) and penalizes bad connections, unassigned signals, and assignment gaps. Examination of a set of high-scoring assignments from a large number of independent runs allows one to determine whether a unique assignment exists for the entire sequence or parts thereof. We demonstrate the MCSA algorithm using two-dimensional (2D) and three-dimensional (3D) solid state NMR spectra of several model protein samples (α-spectrin SH3 domain and protein G/B1 microcrystals, HET-s218–289 fibrils), obtained with magic-angle spinning and standard polarization transfer techniques. The MCSA algorithm and MCASSIGN2 program can accommodate arbitrary combinations of NMR spectra with arbitrary dimensionality, and can therefore be applied in many areas of solid state and solution NMR. PMID:21710190
Andrés Iglesias
2016-01-01
Fitting curves to noisy data points is a difficult problem arising in many scientific and industrial domains. Although polynomial functions are usually applied to this task, there are many shapes that cannot be properly fitted by using this approach. In this paper, we tackle this issue by using rational Bézier curves. This is a very difficult problem that requires computing four different sets of unknowns (data parameters, poles, weights, and the curve degree) that are strongly related to each other in a highly nonlinear way. This leads to a difficult continuous nonlinear optimization problem. In this paper, we propose two simulated annealing schemas (the all-in-one schema and the sequential schema) to determine the data parameterization and the weights of the poles of the fitting curve. These schemas are combined with least-squares minimization and the Bayesian Information Criterion to calculate the poles and the optimal degree of the best-fitting rational Bézier curve, respectively. We apply our methods to a benchmark of three carefully chosen examples of 2D and 3D noisy data points. Our experimental results show that this methodology (particularly, the sequential schema) outperforms previous polynomial-based approaches for our data fitting problem, even in the presence of noise of low-to-medium intensity.
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Digital image analysis techniques are therefore a fast, low-cost methodology for predicting physical properties from geometrical parameters measured on thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the simulated annealing relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results are presented. (author)
Setyawan, Wahyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Nandipati, Giridhar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Univ. of Washington, Seattle, WA (United States); Heinisch, Howard L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Kurtz, Richard J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×E_d, where E_d = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×E_d. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric: SIA clustering increases with temperature, while vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5 T_m) practically no large vacancy clusters are formed, while large SIA clusters appear in all simulations. The implication of such asymmetry for long-term defect survival and damage accumulation is discussed. In addition, <100>{110} SIA loops are observed to form directly in the highest-energy cascades, while <100> vacancy loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both vacancy and SIA loops with Burgers vector of <100> type is relatively rare.
Improved hybrid particle swarm algorithm based on adaptive simulated annealing
杨文光; 严哲; 隋丽丽
2015-01-01
To improve the ability to solve the traveling salesman problem (TSP), the hybrid particle swarm optimization (PSO) algorithm with simulated annealing is improved by introducing an adaptive optimization strategy. A hybrid PSO algorithm with crossover and mutation easily falls into local optima, whereas an adaptive simulated annealing algorithm can escape local optima and search globally; combining the two therefore balances global and local search. The added adaptive strategy provides a condition for judging whether a particle has fallen into a local extremum, and on that basis performs adaptive optimization with a certain probability, strengthening the global search ability. Comparison with experimental results of the hybrid particle swarm algorithm demonstrates the effectiveness of the proposed algorithm.
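The simulated-annealing half of such a hybrid can be sketched on its own for a small TSP instance. The 2-opt neighbourhood and geometric cooling schedule below are common defaults, not the exact operators of the paper's algorithm.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_tsp(pts, t0=10.0, t_end=1e-3, alpha=0.95, sweeps=200, seed=1):
    rng = random.Random(seed)
    n = len(pts)
    cur, best, t = list(range(n)), list(range(n)), t0
    while t > t_end:
        for _ in range(sweeps):
            i, j = sorted(rng.sample(range(n), 2))
            cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # 2-opt reversal
            d = tour_length(cand, pts) - tour_length(cur, pts)
            # Uphill moves are accepted with probability exp(-d/t); this is
            # what lets the search escape the local optima that trap plain PSO.
            if d <= 0 or rng.random() < math.exp(-d / t):
                cur = cand
                if tour_length(cur, pts) < tour_length(best, pts):
                    best = list(cur)
        t *= alpha
    return best
```

The best tour found is always at least as short as the starting tour, since only improvements are recorded into `best`.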
Adaptive image ray-tracing for astrophysical simulations
Parkin, E R
2010-01-01
A technique is presented for producing synthetic images from numerical simulations whereby the image resolution is adapted around prominent features. In so doing, adaptive image ray-tracing (AIR) improves the efficiency of a calculation by focusing computational effort where it is needed most. The results of test calculations show that a factor of >~ 4 speed-up, and a commensurate reduction in the number of pixels required in the final image, can be achieved compared to an equivalent calculation with a fixed resolution image.
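The idea of concentrating rays where the image has structure can be sketched as quadtree refinement of a scalar field. This is a generic illustration, not the AIR implementation; the sharp-edged `disc` field stands in for a prominent feature in a synthetic image.

```python
def adaptive_sample(f, x0, y0, size, depth, max_depth, tol, out):
    """Refine a square cell while f varies across its corners by more than
    tol; each leaf contributes one sample, so flat regions stay coarse."""
    corners = [f(x0, y0), f(x0 + size, y0),
               f(x0, y0 + size), f(x0 + size, y0 + size)]
    if depth < max_depth and max(corners) - min(corners) > tol:
        h = size / 2.0
        for dx in (0.0, h):
            for dy in (0.0, h):
                adaptive_sample(f, x0 + dx, y0 + dy, h,
                                depth + 1, max_depth, tol, out)
    else:
        out.append((x0 + size / 2.0, y0 + size / 2.0, sum(corners) / 4.0))

def disc(x, y):
    # A sharp-edged "source": bright inside a disc of radius 0.5.
    return 1.0 if x * x + y * y < 0.25 else 0.0
```

Sampling `disc` over the unit square with `max_depth=6` touches far fewer cells than the 4^6 = 4096 leaves of a uniform grid at the same finest resolution, the same kind of saving the abstract reports for AIR.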
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
Selection for autochthonous bifidobacterial isolates adapted to simulated gastrointestinal fluid
H Jamalifar
2010-03-01
Background and the purpose of the study: Bifidobacterial strains are highly sensitive to acidic conditions, which can impair their viability in the stomach and in fermented foods and, as a result, restrict their use as live probiotic cultures. The aim of the present study was to obtain bifidobacterial isolates with augmented tolerance to simulated gastrointestinal conditions using a cross-protection method. Methods: Individual bifidobacterial strains were treated in an acidic environment and also in media containing bile salts and NaCl. Viability of the acid- and acid-bile-NaCl-tolerant isolates was further examined in simulated gastric and small-intestinal fluids by subsequent incubation of the probiotic bacteria in the corresponding media for 120 min. Antipathogenic activities of the adapted isolates were compared with those of the original strains. Results and major conclusion: The acid- and acid-bile-NaCl-adapted isolates showed significantly improved viability (p<0.05) in simulated gastric fluid compared to their parent strains. The reductions in bacterial count (log cfu/ml) of the acid- and acid-bile-NaCl-adapted isolates in simulated gastric fluid ranged from 0.64-3.06 and 0.36-2.43 logarithmic units, respectively, after 120 min of incubation. There was no significant difference between the viability of the acid-bile-NaCl-tolerant isolates and the original strains under simulated small-intestinal conditions, except for Bifidobacterium adolescentis (p<0.05). The presence of 15 ml of supernatants of the acid-bile-NaCl-adapted isolates, as well as those of the initial Bifidobacterium strains, inhibited pathogenic bacterial growth for 24 h. Probiotic bacteria with improved ability to survive the harsh gastrointestinal environment could thus be obtained by sequential treatment of the strains in acid, bile salt and NaCl environments.
Simulated Annealing Based Algorithm for Identifying Mutated Driver Pathways in Cancer
Hai-Tao Li
2014-01-01
With the development of next-generation DNA sequencing technologies, large-scale cancer genomics projects can be implemented to help researchers identify driver genes, driver mutations, and driver pathways, which promote cancer proliferation, in large numbers of cancer patients. Hence, one of the remaining challenges is to distinguish the functional mutations vital for cancer development and filter out the nonfunctional, random "passenger mutations." In this study, we introduce a modified method to solve the so-called maximum weight submatrix problem, which is used to identify mutated driver pathways in cancer. The problem is based on two combinatorial properties, namely coverage and exclusivity. In particular, we enhance an integrative model which combines gene mutation and expression data. The experimental results on simulated data show that, compared with the other methods, our method is more efficient. Finally, we apply the proposed method to two real biological datasets. The results show that our proposed method is also applicable in real practice.
Larry W. Burggraf
2013-07-01
To find low-energy SinCn structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines stochastic potential surface search and pseudopotential plane-wave density functional theory Car-Parrinello molecular dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterating this automated, parallel process on a high-performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to ten of the lowest-energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4–12) clusters and found the lowest-energy structures, most not previously reported. By analyzing the bonding patterns of the low-energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.
Widjaja, Effendi; Garland, Marc
2002-07-15
A combination of singular value decomposition, entropy minimization, and simulated annealing was applied to a synthetic 7-species spectroscopic data set with added white noise. The pure spectra were highly overlapping. Global minima for selected objective functions were obtained for the transformation of the first seven right singular vectors. Simple Shannon-type entropy functions were used in the objective functions and realistic physical constraints were imposed in the penalties. It was found that good first approximations for the pure component spectra could be obtained without the use of any a priori information. The present method outperformed two widely used routines, namely Simplisma and OPA-ALS, as well as IPCA. These results indicate that a combination of SVD, entropy minimization, and simulated annealing is a potentially powerful tool for spectral reconstruction from large real experimental systems. Copyright 2002 Wiley Periodicals, Inc.
Min Dai
2013-01-01
Flexible flow-shop scheduling (FFS) with nonidentical parallel machines for minimizing the maximum completion time, or makespan, is a well-known combinatorial problem. Since the problem is known to be strongly NP-hard, it must in practice be tackled by heuristic optimization approaches or solved only in approximate cases. In this paper, an improved genetic-simulated annealing algorithm (IGAA), which combines a genetic algorithm (GA) based on an encoding matrix with a simulated annealing algorithm (SAA) based on a hormone modulation mechanism, is proposed to achieve an optimal or near-optimal solution. The novel hybrid algorithm tries to overcome local optima and to explore the solution space further. To evaluate the performance of IGAA, computational experiments are conducted and compared with results generated by different algorithms. Experimental results clearly demonstrate that the improved metaheuristic algorithm performs considerably well in terms of solution quality, and that it outperforms several other algorithms.
Adaptive quantum computation in changing environments using projective simulation
Tiersch, M.; Ganahl, E. J.; Briegel, H. J.
2015-08-01
Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks.
Disaster Rescue Simulation based on Complex Adaptive Theory
Feng Jiang
2013-05-01
Disaster rescue is one of the key measures of disaster reduction. The rescue process is a complex process characterized by large scale, complicated structure, and nonlinearity, and it is hard to describe and analyze with traditional methods. Based on complex adaptive theory, this paper analyzes the complex adaptability of the rescue process in terms of seven features: aggregation, nonlinearity, mobility, diversity, tagging, internal models, and building blocks. With the support of the Repast platform, an agent-based model including rescue agents and victim agents is proposed. Moreover, two simulations with different parameters are employed to examine the feasibility of the model. The results show that the proposed model is efficient in dealing with disaster rescue simulation and can provide a reference for decision making.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States); University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4 (Canada); Robertson, Douglas D., E-mail: douglas.d.robertson@emory.edu [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States)
2013-07-01
Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape to determine whether the design will withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only capture a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen. High predicted bone density was greater than
Adaptive implicit method for thermal compositional reservoir simulation
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The technique most commonly used for solving these equations is the Fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit Pressure Explicit Saturations, Temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between timestep size and computational cost, the Thermal Adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, including the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
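The per-cell switching idea can be sketched with a simple stability label. The CFL-style bound below is a generic stand-in for TAIM's actual von Neumann-derived criteria, and the cell velocities are invented for illustration.

```python
def label_cells(velocities, dx, dt, cfl_max=1.0):
    """Implicit/explicit switching sketch: a cell whose explicit CFL number
    |v|*dt/dx exceeds the stability bound is labeled implicit (stable at any
    timestep, but expensive); all others keep the cheap explicit update."""
    labels = []
    for v in velocities:
        cfl = abs(v) * dt / dx
        labels.append("implicit" if cfl > cfl_max else "explicit")
    return labels
```

In an adaptive implicit run this labeling is re-evaluated each timestep, so the expensive implicit treatment follows the displacement front through the grid.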
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States)
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Moment Preserving Adaptive Particle Weighting Scheme for PIC Simulations
2012-07-01
[Abstract not recoverable; only presentation fragments survive extraction. They mention an analytical solution for the density n(x, t); Crank-Nicolson particle simulations being stable and non-dissipative for Re(λ) = 0; random and octree particle-merge tests on bi-Maxwellian cases; and a merge-and-split scheme that adapts the particle count.]
Adaptive resolution simulation of an atomistic protein in MARTINI water
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej
2014-02-01
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine- and coarse-grid simulations of several multiphase reservoirs from around the world.
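The damped Newton iteration used on each local grid can be sketched in one dimension. The backtracking rule below is a common damping choice, not necessarily the simulator's; the cubic test function is invented for illustration.

```python
def damped_newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method with backtracking damping: halve the step until the
    residual actually decreases, guarding against overshoot on strongly
    nonlinear residuals."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = fx / df(x)
        lam = 1.0
        # Damping loop: accept only residual-reducing fractions of the step.
        while abs(f(x - lam * step)) >= abs(fx) and lam > 1e-8:
            lam *= 0.5
        x -= lam * step
    return x
```

For a well-behaved residual the damping factor stays at 1 and the iteration reduces to plain Newton with its quadratic convergence.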
Hydrodynamical Adaptive Mesh Refinement Simulations of Disk Galaxies
Gibson, Brad K; Sanchez-Blazquez, Patricia; Teyssier, Romain; House, Elisa L; Brook, Chris B; Kawata, Daisuke
2008-01-01
To date, fully cosmological hydrodynamic disk simulations to redshift zero have only been undertaken with particle-based codes, such as GADGET, Gasoline, or GCD+. In light of the (supposed) limitations of traditional implementations of smoothed particle hydrodynamics (SPH), or at the very least their respective idiosyncrasies, it is important to explore approaches to galaxy formation complementary to the SPH paradigm. We present the first high-resolution cosmological disk simulations to redshift zero using an adaptive mesh refinement (AMR)-based hydrodynamical code, in this case, RAMSES. We analyse the temporal and spatial evolution of the simulated stellar disks' vertical heating, velocity ellipsoids, stellar populations, vertical and radial abundance gradients (gas and stars), assembly/infall histories, warps/lopsidedness, disk edges/truncations (gas and stars), and ISM physics implementations, and compare and contrast these properties with our sample of cosmological SPH disks, generated with GCD+. These prelim...
LIANG WEN-XI; ZHANG JING-JUAN; LÜ JUN-FENG; LIAO RUI
2001-01-01
We have designed a spatially quantized diffractive optical element (DOE) for controlling the beam profile in three-dimensional space with the help of the simulated annealing (SA) algorithm. In this paper, we investigate the annealing schedule and the neighbourhood, the deterministic parameters of the process that determine the quality of the SA algorithm. The algorithm is employed to solve the discrete stochastic optimization problem of the design of a DOE. The objective function which constrains the optimization is also studied. The computed results demonstrate that the procedure converges stably to an optimal solution close to the global optimum within an acceptable computing time. The results meet the design requirements well and are applicable.
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in simulation of flow in porous media are considered. Such methods have been proven useful for overcoming limitations of computational resources and improving the resolution of the numerical solutions to a wide range of problems. Local refinement of the computational mesh where it is needed to improve the accuracy of solutions yields better solution resolution, representing more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three parts. First we explain the methodology and intricacies of the adaptive finite difference scheme. Then we extend a cell-centered finite difference approximation to a multilevel hierarchy of refined grids, and finally we employ the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced consumption of computing resources. (Author)
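The subcycling order can be sketched by logging which level advances at which time. This bookkeeping-only sketch (no actual PDE update) follows the Berger-Oliger pattern the thesis extends; the synchronization point is where flux corrections would be applied.

```python
def subcycle(n_levels, dt0, ratio, n_coarse):
    """Return the (level, time-reached) step sequence of a Berger-Oliger
    subcycled advance: level l steps with dt0 / ratio**l, and every level
    reaches the coarse time before the next coarse step begins."""
    log = []

    def step(level, t, dt):
        log.append((level, round(t + dt, 12)))  # advance this level by dt
        if level + 1 < n_levels:
            for k in range(ratio):              # finer level takes `ratio` substeps
                step(level + 1, t + k * dt / ratio, dt / ratio)
        # here coarse and fine data would be synchronized at time t + dt

    for n in range(n_coarse):
        step(0, n * dt0, dt0)
    return log
```

For two levels with refinement ratio 2 and one coarse step of 1.0, `subcycle(2, 1.0, 2, 1)` yields `[(0, 1.0), (1, 0.5), (1, 1.0)]`: the coarse level takes one step, then the fine level catches up in two halves.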
Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms
Bosl, W J
2005-01-26
The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect to all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation and climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis
Adaptive Techniques for Clustered N-Body Cosmological Simulations
Menon, Harshitha; Zheng, Gengbin; Jetley, Pritish; Kale, Laxmikant; Quinn, Thomas; Governato, Fabio
2014-01-01
ChaNGa is an N-body cosmology simulation application implemented using Charm++. In this paper, we present the parallel design of ChaNGa and address many challenges arising due to the high dynamic ranges of clustered datasets. We focus on optimizations based on adaptive techniques for scaling to more than 128K cores. We demonstrate strong scaling on up to 512K cores of Blue Waters evolving 12 and 24 billion particles. We also show strong scaling of highly clustered datasets on up to 128K cores.
吴坤鸿; 詹世贤
2016-01-01
According to the rules of fire strike, a target assignment model with multiple objective functions is established, and a distributed genetic simulated annealing algorithm (DGSA) is proposed to solve it. DGSA improves on the classic genetic algorithm (GA) as follows: the single-objective serial search is replaced by a multi-objective distributed search, better suited to multi-objective optimization; individuals are selected by combining preservation of the best individual with roulette-wheel selection; the simulated annealing algorithm is embedded in the crossover operator; and an adaptive mutation probability is used. Together these changes maintain a good balance between the breadth and depth of the search. Finally, the effectiveness and reliability of DGSA are verified by simulation experiments.
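The selection rule, elitist preservation combined with roulette-wheel sampling, can be sketched directly. This generic GA fragment stands in for DGSA's selection step; the population and fitness values in the usage are invented for illustration, and fitness is assumed positive.

```python
import random

def select(population, fitness, rng):
    """Build the next generation: always keep the best individual (elitism),
    then fill the remaining slots by fitness-proportional roulette draws."""
    best = max(population, key=fitness)
    total = float(sum(fitness(p) for p in population))
    chosen = [best]
    while len(chosen) < len(population):
        r = rng.random() * total        # spin the wheel: r in [0, total)
        acc = 0.0
        for p in population:
            acc += fitness(p)
            if acc >= r:                # first individual whose slice covers r
                chosen.append(p)
                break
    return chosen
```

Elitism guarantees the best solution found so far is never lost, while the roulette draws keep selection pressure proportional to fitness rather than deterministic.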
Simulated annealing reveals the kinetic activity of SGLT1, a member of the LeuT structural family.
Longpré, Jean-Philippe; Sasseville, Louis J; Lapointe, Jean-Yves
2012-10-01
The Na(+)/glucose cotransporter (SGLT1) is the archetype of membrane proteins that use the electrochemical Na(+) gradient to drive uphill transport of a substrate. The crystal structure recently obtained for vSGLT strongly suggests that SGLT1 adopts the inverted repeat fold of the LeuT structural family for which several crystal structures are now available. What is largely missing is an accurate view of the rates at which SGLT1 transits between its different conformational states. In the present study, we used simulated annealing to analyze a large set of steady-state and pre-steady-state currents measured for human SGLT1 at different membrane potentials, and in the presence of different Na(+) and α-methyl-d-glucose (αMG) concentrations. The simplest kinetic model that could accurately reproduce the time course of the measured currents (down to the 2 ms time range) is a seven-state model (C(1) to C(7)) where the binding of the two Na(+) ions (C(4)→C(5)) is highly cooperative. In the forward direction (Na(+)/glucose influx), the model is characterized by two slow, electroneutral conformational changes (59 and 100 s(-1)) which represent reorientation of the free and of the fully loaded carrier between inside-facing and outside-facing conformations. From the inward-facing (C(1)) to the outward-facing Na-bound configuration (C(5)), 1.3 negative elementary charges are moved outward. Although extracellular glucose binding (C(5)→C(6)) is electroneutral, the next step (C(6)→C(7)) carries 0.7 positive charges inside the cell. Alignment of the seven-state model with a generalized model suggested by the structural data of the LeuT fold family suggests that electrogenic steps are associated with the movement of the so-called thin gates on each side of the substrate binding site. To our knowledge, this is the first model that can quantitatively describe the behavior of SGLT1 down to the 2 ms time domain. The model is highly symmetrical and in good agreement with the
Hydrodynamics in adaptive resolution particle simulations: Multiparticle collision dynamics
Alekseeva, Uliana, E-mail: Alekseeva@itc.rwth-aachen.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); German Research School for Simulation Sciences (GRS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Winkler, Roland G., E-mail: r.winkler@fz-juelich.de [Theoretical Soft Matter and Biophysics, Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Sutmann, Godehard, E-mail: g.sutmann@fz-juelich.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); ICAMS, Ruhr-University Bochum, D-44801 Bochum (Germany)
2016-06-01
A new adaptive resolution technique for particle-based multi-level simulations of fluids is presented. In the approach, the representation of fluid and solvent particles is changed on the fly between an atomistic and a coarse-grained description. The present approach is based on a hybrid coupling of the multiparticle collision dynamics (MPC) method and molecular dynamics (MD), thereby coupling stochastic and deterministic particle-based methods. Hydrodynamics is examined by calculating velocity and current correlation functions for various mixed and coupled systems. We demonstrate that hydrodynamic properties of the mixed fluid are conserved by a suitable coupling of the two particle methods, and that the simulation results agree well with theoretical expectations.
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
王宏健; 王晶; 曲丽萍; 刘振业
2013-01-01
A FastSLAM algorithm based on particle-weight variance reduction is presented to address the loss of estimation accuracy in AUV (autonomous underwater vehicle) localization caused by particle degeneracy and the sample impoverishment that resampling introduces in standard FastSLAM. The particle-weight variance is reduced by an adaptive exponential fading factor derived from the cooling function of simulated annealing, which increases the effective particle number; this step replaces resampling in standard FastSLAM. The kinematic model of the AUV, the feature model, and the sensor measurement models are established, and features are extracted with the Hough transform. A simultaneous localization and mapping experiment using the variance-reduction FastSLAM was run on sea-trial data. The results indicate that the method maintains particle diversity, weakens degeneracy, and improves the accuracy and stability of the AUV navigation and localization system.
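The weight-tempering idea, raising particle weights to an exponent that "anneals" toward 1 as iterations proceed so that early weights are flattened and the effective particle number grows, can be sketched as follows; the exponential schedule, its constants, and the function names are assumptions for illustration, not the authors' exact fading factor.

```python
import numpy as np

def effective_n(weights):
    """Effective sample size 1 / sum(w_i^2) of normalized weights."""
    w = weights / weights.sum()
    return 1.0 / np.sum(w ** 2)

def temper_weights(weights, k, alpha0=0.5, tau=20.0):
    """Apply an adaptive fading exponent to the particle weights.
    gamma starts at alpha0 and decays toward 1 as iteration k grows,
    mimicking an exponential cooling schedule; gamma < 1 flattens the
    weight distribution and so raises the effective particle number."""
    gamma = 1.0 - (1.0 - alpha0) * np.exp(-k / tau)
    w = weights ** gamma
    return w / w.sum()
```

Early in the run (small `k`) the tempering is strong and degeneracy is suppressed; late in the run the exponent approaches 1 and the filter behaves like a standard weighted particle filter.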
Kumar Deepak
2015-12-01
Groundwater contamination due to leakage of gasoline is one of several causes that pollute the groundwater environment. In the past few years, in-situ bioremediation has attracted researchers because of its ability to remediate the contaminant at its site at low cost. This paper proposes the use of a new hybrid algorithm to optimize a multi-objective function whose first objective is the cost of remediation and whose second objective is the residual contaminant at the end of the remediation period. The hybrid algorithm combines the methods of Differential Evolution, Genetic Algorithms and Simulated Annealing. Support Vector Machines (SVM) were used as a virtual simulator for biodegradation of contaminants in the groundwater flow. The results obtained from the hybrid algorithm were compared with Differential Evolution (DE), the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Simulated Annealing (SA). It was found that the proposed hybrid algorithm was capable of providing the best solution. Fuzzy logic was used to find the best compromise solution, and finally a pumping-rate strategy for groundwater remediation is presented for that solution. The results show that the cost incurred for the best compromise solution is intermediate between the highest and lowest costs incurred for the other non-dominated solutions.
Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations
Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc
2016-09-12
While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
Simulation of random events for adaptive control systems calibration
Drăgoi Mircea Viorel
2017-01-01
The paper deals with a mathematical model that simulates the random occurrence of events during cutting processes by milling. The evolution of certain parameters that typify cutting processes depends on predictable and non-predictable variables. In this context, either the material hardness, which varies in different regions of the billet, or the cutting depth can act as a non-predictable variable. In order to design a response in terms of cutting parameters to non-predictable variations of the inputs, a simulation of such phenomena is very useful. A mathematical model that generates random events, non-uniform in both frequency and intensity, is described here. A virtual instrument built in LabVIEW generates (pseudo)random events based on a combination of random numbers, so that the evolution of the simulated process closely resembles a real one. Furthermore, the user of the virtual instrument can generate events manually at certain moments and of certain intensity. This can be a useful tool for studying response-design algorithms that re-balance the process within adaptive control systems.
Salter, Bill Jean, Jr.
Purpose. The advent of new, so-called IVth generation external beam radiation therapy treatment machines (e.g. Scanditronix' MM50 Racetrack Microtron) has raised the question of how the capabilities of these new machines might be exploited to produce extremely conformal dose distributions. Such machines can produce electron energies as high as 50 MeV and, due to their scanned-beam delivery of electron treatments, can modulate intensity and even energy within a broad field. Materials and methods. Two patients with 'challenging' tumor geometries were selected from the patient archives of the Cancer Therapy and Research Center (CTRC) in San Antonio, Texas. The treatment scheme that was tested allowed for twelve energy- and intensity-modulated beams, equi-spaced about the patient; only intensity was modulated for the photon treatment. The elementary beams, incident from any of the twelve allowed directions, were assumed parallel, and the elementary electron beams were modeled by elementary beam data. The optimal arrangement of elementary beam energies and/or intensities was determined by Szu-Hartley fast simulated annealing optimization. Optimized treatment plans were determined for each patient using both the high-energy, intensity- and energy-modulated electron (HIEME) modality and the 6 MV photon modality. The 'quality' of rival plans was scored using three different, popular objective functions: Root Mean Square (RMS), Maximize Dose Subject to Dose and Volume Limitations (MDVL; Morrill et al.), and Probability of Uncomplicated Tumor Control (PUTC) methods. The scores of the two optimized treatments (i.e. HIEME and intensity-modulated photons) were compared to the score of the conventional plan with which the patient was actually treated. Results. The first patient evaluated presented a deeply located target volume partially surrounding the spinal cord. A healthy right kidney was immediately adjacent to the tumor volume, separated
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin
2015-01-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply model reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70-95%.
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.
2016-09-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin; Wang, Da
2016-03-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70-95%.
Computational Simulation of Hypervelocity Penetration Using Adaptive SPH Method
QIANG Hongfu; MENG Lijun
2006-01-01
The normal hypervelocity impact of an Al thin plate by an Al sphere was numerically simulated using the adaptive smoothed particle hydrodynamics (ASPH) method. In this method, the isotropic smoothing algorithm of standard SPH is replaced with anisotropic smoothing involving ellipsoidal kernels whose axes evolve automatically to follow the mean particle spacing as it varies in time, space, and direction around each particle. Using ASPH, the anisotropic volume changes under strong shock conditions are captured more accurately and clearly. The sophisticated meshless and Lagrangian features inherent in the SPH method are kept for treating large deformations, large inhomogeneities and tracing free surfaces in the extremely transient impact process. A two-dimensional ASPH program was coded in C++. The developed hydrocode is examined on example problems of hypervelocity impacts of solid materials. The results obtained from the numerical simulation are compared with available experimental ones, and good agreement is observed.
Glowworm swarm optimization algorithm merging a simulated annealing strategy
曹秀爽
2014-01-01
The artificial glowworm swarm optimization algorithm is a recent research direction in the field of swarm intelligence. The algorithm has been successful in complex function optimization, but it easily falls into local optima and converges slowly in later iterations. The simulated annealing algorithm, by contrast, has excellent global search ability. Combining their advantages, an improved glowworm swarm optimization algorithm based on a simulated annealing strategy is proposed. The simulated annealing strategy is integrated into the global search process of the glowworm swarm algorithm, and a tempering strategy is integrated into the local search process of the hybrid algorithm to improve search precision, improving the overall global and local search performance of glowworm swarm optimization. Simulation results show that the hybrid algorithm significantly increases solution accuracy and convergence speed and is a feasible and effective method.
Simulating adaptive wood harvest in a changing climate
Yousefpour, Rasoul; Nabel, Julia; Pongratz, Julia
2016-04-01
The world's forests experience substantial carbon exchange fluxes between land and atmosphere. Large carbon sinks occur in response to changes in environmental conditions (such as climate change and increased atmospheric CO2 concentrations), removing about one quarter of current anthropogenic CO2 emissions. Large sinks also occur due to regrowth of forest on areas of agricultural abandonment or forest management. Forest management, on the other hand, also leads to substantial amounts of carbon eventually being released to the atmosphere. Both sinks and sources attributable to forests therefore depend on the intensity of management. Forest management in turn depends on the availability of resources, which is influenced by environmental conditions and the sustainability of the management systems applied. Estimating future carbon fluxes therefore requires accounting for the interaction of environmental conditions, forest growth, and management. However, this interaction is not fully captured by current modeling approaches: Earth system models depict in detail the interactions between climate, the carbon cycle, and vegetation growth, but use prescribed information on management. Resource needs and land management, however, are simulated by integrated assessment models that typically have only coarse representations of the influence of environmental changes on vegetation growth and are typically driven by the demand for wood from regional population growth and energy needs. Here we present a study that provides the link between environmental conditions, forest growth and management. We extend the land component JSBACH of the Max Planck Institute's Earth system model (MPI-ESM) to simulate potential wood harvest in response to altered growth conditions, and thus as adaptive to changing climate and CO2 conditions. We apply the altered model to estimate potential wood harvest for future climates (representative concentration pathways, RCPs) for the management scenario of
Bjelić Mišo B.
2016-01-01
Simulation models of welding processes allow us to predict the influence of welding parameters on the temperature field during welding and, through the temperature field, their influence on weld geometry and microstructure. This article presents a numerical, finite-difference based model of heat transfer during welding of thin sheets. Unfortunately, the accuracy of the model depends on many parameters which cannot be prescribed accurately. To solve this problem, we used the simulated annealing optimization method in combination with the presented numerical model. In this way, we were able to determine uncertain values of the heat source parameters, arc efficiency, emissivity and enhanced conductivity. The calibration procedure used thermocouple measurements of temperatures during welding of P355GH steel. The obtained results were used as input for a simulation run. The results of the simulation showed that the presented calibration procedure can significantly improve the reliability of the heat transfer model. [National CEEPUS Office of Czech Republic (project CIII-HR-0108-07-1314) and the Ministry of Education and Science of the Republic of Serbia (project TR37020)]
Estevez H, O.; Duque, J. [Universidad de La Habana, Instituto de Ciencia y Tecnologia de Materiales, 10400 La Habana (Cuba); Rodriguez H, J. [UNAM, Instituto de Investigaciones en Materiales, 04510 Mexico D. F. (Mexico); Yee M, H., E-mail: oestevezh@yahoo.com [Instituto Politecnico Nacional, Escuela Superior de Fisica y Matematicas, 07738 Mexico D. F. (Mexico)
2015-07-01
1-Furoyl-3,3-diphenylthiourea (FDFT) was synthesized and characterized by FTIR, {sup 1}H and {sup 13}C NMR, and ab initio X-ray powder structure analysis. FDFT crystallizes in the monoclinic space group P2{sub 1} with a = 12.691(1), b = 6.026(2), c = 11.861(1) A, β = 117.95(2)° and V = 801.5(3) A{sup 3}. The crystal structure was determined from laboratory X-ray powder diffraction data using a direct-space global optimization strategy (simulated annealing) followed by Rietveld refinement. The thiourea group makes a dihedral angle of 73.8(6)° with the furoyl group. In the crystal structure, molecules are linked by van der Waals interactions, forming one-dimensional chains along the a axis. (Author)
Yanhui Li
2013-01-01
Facility location, inventory control, and vehicle routes scheduling are critical and highly related problems in the design of logistics system for e-business. Meanwhile, the return ratio in Internet sales was significantly higher than in the traditional business. Many of returned merchandise have no quality defects, which can reenter sales channels just after a simple repackaging process. Focusing on the existing problem in e-commerce logistics system, we formulate a location-inventory-routing problem model with no quality defects returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under e-supply chain environment.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle routes scheduling are critical and highly related problems in the design of logistics system for e-business. Meanwhile, the return ratio in Internet sales was significantly higher than in the traditional business. Many of returned merchandise have no quality defects, which can reenter sales channels just after a simple repackaging process. Focusing on the existing problem in e-commerce logistics system, we formulate a location-inventory-routing problem model with no quality defects returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under e-supply chain environment.
Joseph, Joby; Muthukumaran, S. [National Institute of Technology, Tamil Nadu (India)
2016-01-15
Abundant improvements have occurred in materials handling, especially in metal joining. Pulsed current gas tungsten arc welding (PCGTAW) is one of the consequential fusion techniques. In this work, PCGTAW of AISI 4135 steel produced through powder metallurgy (P/M) has been executed, and the process parameters have been highlighted applying Taguchi's L9 orthogonal array. The results show that the peak current (Ip), gas flow rate (GFR), welding speed (WS) and base current (Ib) are the critical parameters that strongly determine the tensile strength (TS) as well as the percentage of elongation (% Elong) of the joint. The practical impact of applying the genetic algorithm (GA) and simulated annealing (SA) to the PCGTAW process has been authenticated by calculating the deviation between predicted and experimental welding process parameters.
Bailing Liu
2015-01-01
Facility location, inventory control, and vehicle routes scheduling are three key issues to be settled in the design of logistics systems for e-commerce. Due to the online shopping features of e-commerce, customer returns are much more frequent than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with no-quality-defect returns. To solve the NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA on optimal solution, computing time, and computing stability.
Banani Basu
2010-05-01
In this paper, we propose a technique based on two evolutionary algorithms, simulated annealing and particle swarm optimization, to design a linear array of half-wavelength-long parallel dipole antennas that will generate a pencil beam in the horizontal plane with minimum standing wave ratio (SWR) and fixed side lobe level (SLL). The dynamic range ratio of the current amplitude distribution is kept at a fixed value. Two different methods have been proposed with different inter-element spacings but the same current amplitude distribution. The first uses a fixed geometry and optimizes the excitation distribution on it. In the second case, further reduction of SWR is achieved via optimization of the inter-element spacing while keeping the amplitude distribution the same as before. The coupling effect between the elements is analyzed using the induced EMF method and minimized in terms of SWR. Numerical results obtained from SA are validated by comparison with results obtained using PSO.
Kaplan, Sezgin; Rabadi, Ghaith
2013-01-01
This article addresses the aerial refuelling scheduling problem (ARSP), where a set of fighter jets (jobs) with certain ready times must be refuelled from tankers (machines) by their due dates; otherwise, they reach a low fuel level (deadline) incurring a high cost. ARSP is an identical parallel machine scheduling problem with release times and due date-to-deadline windows to minimize the total weighted tardiness. A simulated annealing (SA) and metaheuristic for randomized priority search (Meta-RaPS) with the newly introduced composite dispatching rule, apparent piecewise tardiness cost with ready times (APTCR), are applied to the problem. Computational experiments compared the algorithms' solutions to optimal solutions for small problems and to each other for larger problems. To obtain optimal solutions, a mixed integer program with a piecewise weighted tardiness objective function was solved for up to 12 jobs. The results show that Meta-RaPS performs better in terms of average relative error but SA is more efficient.
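A minimal SA for an identical-parallel-machine weighted-tardiness problem of this kind might look like the sketch below; the job encoding, the neighborhood move (remove one job and reinsert it at a random position on a random machine), and the cooling constants are assumptions, and the ARSP deadline-penalty term is omitted for brevity.

```python
import math
import random

def total_weighted_tardiness(assign, jobs):
    """assign: one job-index list per machine, in sequence order.
    jobs: {job_id: (ready, proc_time, due, weight)}."""
    cost = 0.0
    for seq in assign:
        t = 0.0
        for j in seq:
            r, p, d, w = jobs[j]
            t = max(t, r) + p              # wait for release, then process
            cost += w * max(0.0, t - d)    # weighted tardiness past due date
    return cost

def anneal(jobs, m, iters=5000, t0=10.0, cool=0.999, seed=1):
    """Simulated annealing over job-to-machine sequences (minimisation)."""
    rng = random.Random(seed)
    sol = [[] for _ in range(m)]
    for j in jobs:                          # arbitrary initial assignment
        sol[rng.randrange(m)].append(j)
    cur_c = total_weighted_tardiness(sol, jobs)
    best, best_c = [s[:] for s in sol], cur_c
    temp = t0
    for _ in range(iters):
        cand = [s[:] for s in sol]
        src = rng.choice([i for i in range(m) if cand[i]])
        j = cand[src].pop(rng.randrange(len(cand[src])))
        dst = rng.randrange(m)
        cand[dst].insert(rng.randrange(len(cand[dst]) + 1), j)
        c = total_weighted_tardiness(cand, jobs)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if c <= cur_c or rng.random() < math.exp((cur_c - c) / temp):
            sol, cur_c = cand, c
            if c < best_c:
                best, best_c = [s[:] for s in cand], c
        temp *= cool                        # geometric cooling
    return best, best_c
```

On the small instances the article solves exactly with a mixed integer program, a sketch like this can be checked against the optimal weighted tardiness.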
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle routes scheduling are critical and highly related problems in the design of logistics system for e-business. Meanwhile, the return ratio in Internet sales was significantly higher than in the traditional business. Many of returned merchandise have no quality defects, which can reenter sales channels just after a simple repackaging process. Focusing on the existing problem in e-commerce logistics system, we formulate a location-inventory-routing problem model with no quality defects returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under e-supply chain environment. PMID:24489489
Web Mining Based on Hybrid Simulated Annealing Genetic Algorithm and HMM
邹腊梅; 龚向坚
2012-01-01
The training algorithm used for HMMs is a sub-optimal local search that is sensitive to its initial parameters; training a typical hidden Markov model from random parameters often yields a sub-optimal model, which is ineffective for mining Web information. GA has excellent global search ability but is prone to premature convergence and converges slowly, while SA has excellent local search ability but wanders randomly and lacks global search ability. Combining the advantages of the genetic algorithm and simulated annealing, a hybrid simulated annealing genetic algorithm (SGA) is proposed. SGA chooses its parameters by experiment and optimizes the initial HMM parameters in combination with Baum-Welch during Web mining, compensating for Baum-Welch's sensitivity to initialization. The experimental results show that SGA significantly improves performance in precision and recall.
Simulated annealing ant colony algorithm for the quadratic assignment problem (QAP)
朱经纬; 芮挺; 蒋新胜; 张金林
2011-01-01
A simulated annealing ant colony algorithm is presented to tackle the quadratic assignment problem (QAP). The simulated annealing method is introduced into the ant colony algorithm by setting a temperature that changes with the iterations. After each tour-construction round, the solution set obtained by the colony is taken as the candidate set; the update set is obtained by accepting solutions from the candidate set with a probability determined by the temperature, and this update set is used to update the pheromone trail matrix. In each trail update, the best solution found so far is used to reinforce the trail information, and the trail matrix is reset when the algorithm stagnates. Computational experiments demonstrate that the algorithm has high stability and convergence speed.
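The candidate-set filtering step, Metropolis acceptance of colony solutions relative to the iteration best before the pheromone update, might be sketched as follows (the function and its signature are illustrative assumptions, not the paper's exact procedure):

```python
import math
import random

def select_update_set(candidates, costs, temp, rng=random):
    """Build the update set from the colony's candidate solutions.
    The iteration-best solution is kept unconditionally; every other
    candidate is accepted with the Metropolis probability
    exp(-(cost - best_cost) / temp), so high temperatures keep a diverse
    update set and low temperatures keep mostly near-best solutions."""
    best = min(costs)
    update = []
    for sol, c in zip(candidates, costs):
        if c == best or rng.random() < math.exp(-(c - best) / temp):
            update.append((sol, c))
    return update
```

Only the solutions in the returned update set would then deposit pheromone, which is how the temperature schedule throttles the trail-matrix update in the hybrid algorithm.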
Marco A. C. Benvenga
2011-10-01
Kinetic simulation and drying-process optimization of corn malt by simulated annealing (SA), estimating the temperature and time parameters that preserve maximum amylase activity in the obtained product, are presented here. Germinated corn seeds were dried at 54-76 °C in a convective dryer, with periodic measurement of moisture content and enzymatic activity. The experimental data obtained were submitted to modeling, and simulation and optimization of the drying process were performed with the SA method, a randomized improvement algorithm analogous to the physical annealing process. Results showed that the seeds were best dried between 3 h and 5 h. Among the models used in this work, the kinetic model of water diffusion into corn seeds gave the best fit. Drying temperature and time showed a quadratic influence on the enzymatic activity. Optimization through SA found the best condition at 54 °C and between 5.6 h and 6.4 h of drying. Values of specific activity in the corn malt were found between 5.26±0.06 SKB/mg and 15.69±0.10% of remaining moisture.
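An SA search over drying temperature and time for a quadratic response surface of this kind can be sketched as below; the surface coefficients, peak location, starting point, and cooling schedule are invented for illustration and are not the fitted model from the study.

```python
import math
import random

def activity(T, t):
    """Hypothetical quadratic response surface for amylase activity,
    peaking at 54 C and 6 h (coefficients are illustrative only)."""
    return 16.0 - 0.01 * (T - 54.0) ** 2 - 0.5 * (t - 6.0) ** 2

def anneal_drying(iters=20000, seed=3):
    """Maximize activity(T, t) over 54-76 C and 3-8 h by simulated
    annealing with clamped random-walk moves and geometric cooling."""
    rng = random.Random(seed)
    T, t = 65.0, 4.0                      # arbitrary mid-range start
    cur = activity(T, t)
    temp = 1.0
    for _ in range(iters):
        Tn = min(76.0, max(54.0, T + rng.uniform(-1.0, 1.0)))
        tn = min(8.0, max(3.0, t + rng.uniform(-0.2, 0.2)))
        nxt = activity(Tn, tn)
        # maximisation: accept improvements, sometimes accept worse states
        if nxt >= cur or rng.random() < math.exp((nxt - cur) / temp):
            T, t, cur = Tn, tn, nxt
        temp *= 0.9995
    return T, t, cur
```

The clamping keeps the walk inside the experimental range, matching how the study restricts drying conditions to the tested 54-76 °C window.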
Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.
The paper presents an application concerning hydrological modelling for the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a LabVIEW programming environment, was applied with the aim of modelling the hydrological processes in this catchment. Previous research carried out in this region outlined the importance of environmental tracers in studying hydrological behaviour, and substantial knowledge has been accumulated during this period concerning the mechanisms responsible for runoff generation. In conformity with the theoretical constraints, Topmodel was applied to a Haute-Mentue sub-catchment where tracing experiments consistently showed low contributions of soil water during flood events. The model was applied for two humid periods in 1998. First, the model was calibrated to provide the best estimates of total runoff. However, the simulated components (groundwater and rapid flow) deviated far from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by environmental tracing. The calibration of the model was done using simulated annealing (SA) techniques, which are easy to implement and statistically allow convergence to a global minimum. The only problem is that the method is time- and computation-intensive. To improve this, a version of SA known as very fast simulated annealing (VFSA) was used. The principles are the same as for the SA technique: the random search is guided by a certain probability distribution and the acceptance criterion is the same as for SA, but VFSA takes better account of the range of variation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space. The VFSA algorithm allows a differentiated search in relation with the
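The acceptance rule shared by SA and VFSA in calibration schemes like the one above can be sketched as follows. This is a generic illustration, not the Topmodel/VFSA implementation; the double-well energy and Gaussian neighbor below are placeholder choices, and a real VFSA would additionally adapt the step distribution per parameter:

```python
import math
import random

def anneal(energy, neighbor, x0, t0=1.0, cooling=0.99, steps=5000, seed=1):
    """Plain simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        # Metropolis criterion: always accept improvements; accept
        # worse moves with probability exp(-delta / T).
        if ey < e or rng.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling; VFSA adapts this per dimension
    return best_x, best_e

# Placeholder calibration problem: a double-well energy with minima at x = +/-1.
best_x, best_e = anneal(
    energy=lambda x: (x * x - 1.0) ** 2,
    neighbor=lambda x, rng: x + rng.gauss(0.0, 0.1),
    x0=3.0,
)
```

In a hydrological calibration the state `x` would be the parameter vector and `energy` a misfit between simulated and observed runoff components; the mechanics are unchanged.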
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Liang, Faming
2010-04-08
In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.
2016-05-01
Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of 16O, 208Pb, and 238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ =0.05 fm-3 , proton fraction of Yp=0.3 , and temperature of T =0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
Balanced adaptive simulation of pollutant transport in Bay of Tangier
2014-01-01
A balanced adaptive scheme is proposed for the numerical solution of the coupled non-linear shallow water equations and depth-averaged advection-diffusion pollutant transport equation. The scheme uses the Roe approximate Riemann solver with centred discretization for advection terms and the Vazquez scheme for source terms. It is designed to handle non-uniform bed topography on triangular unstructured meshes, while satisfying the conservation property. Dynamic mesh adaptation criteria are base...
Schnepp, Sascha M
2011-01-01
A framework for performing dynamic mesh adaptation with the discontinuous Galerkin method (DGM) is presented. Adaptations include modifications of the local mesh step size (h-adaptation) and the local degree of the approximating polynomials (p-adaptation) as well as their combination. The computation of the approximation within locally adapted elements is based on projections between finite element spaces (FES), which are shown to preserve the upper limit of the electromagnetic energy. The formulation supports high level hanging nodes and applies precomputation of surface integrals for increasing computational efficiency. A full wave simulation of electromagnetic scattering from a radar reflector demonstrates the applicability to large scale problems in three-dimensional space.
Pilot Evaluation of Adaptive Control in Motion-Based Flight Simulator
Kaneshige, John T.; Campbell, Stefan Forrest
2009-01-01
The objective of this work is to assess the strengths, weaknesses, and robustness characteristics of several MRAC (Model-Reference Adaptive Control) based adaptive control technologies garnering interest from the community as a whole. To facilitate this, a control study using piloted and unpiloted simulations to evaluate sensitivities and handling qualities was conducted. The adaptive control technologies under consideration were ALR (Adaptive Loop Recovery), BLS (Bounded Linear Stability), Hybrid Adaptive Control, L1, OCM (Optimal Control Modification), PMRAC (Predictor-based MRAC), and traditional MRAC.
Jia, F.; Lichti, D.
2017-09-01
The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), the genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and having fewer parameters to tune.
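The coverage part of the viewpoint-selection formulation described above (a minimum set of viewpoints seeing every wall segment) is a set-cover instance, for which a greedy pick is a common baseline. The sketch below is illustrative only, not the SA/GA/PSO machinery compared in the paper, and the viewpoint names and visibility sets are hypothetical:

```python
def greedy_viewpoints(coverage):
    """coverage maps each candidate viewpoint to the set of wall-segment
    ids visible from it (after scanning-geometry constraints are applied).
    Repeatedly picks the viewpoint seeing the most still-uncovered segments."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining segments are invisible from every viewpoint
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical room: three candidate viewpoints, five wall segments.
plan = greedy_viewpoints({"vp1": {1, 2, 3}, "vp2": {3, 4}, "vp3": {4, 5}})
```

A metaheuristic such as the paper's SA/GA/PSO would then search around such a feasible plan, trading viewpoint count against the sum of incidence angles.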
Computerized adaptive measurement of depression: A simulation study
Mammen Oommen
2004-05-01
Background: Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods: Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results: The adaptive BDI had an AUC of 88% as an indicator of a SCID diagnosis of MDE, equivalent to the full BDI. The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score, and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions: Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.
Fonville, Judith M., E-mail: j.fonville07@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Bylesjoe, Max, E-mail: max.bylesjo@almacgroup.com [Almac Diagnostics, 19 Seagoe Industrial Estate, Craigavon BT63 5QD (United Kingdom); Coen, Muireann, E-mail: m.coen@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Nicholson, Jeremy K., E-mail: j.nicholson@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Holmes, Elaine, E-mail: elaine.holmes@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Lindon, John C., E-mail: j.lindon@imperial.ac.uk [Biomolecular Medicine, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Sir Alexander Fleming Building, South Kensington, London SW7 2AZ (United Kingdom); Rantalainen, Mattias, E-mail: rantalai@stats.ox.ac.uk [Department of Statistics, Oxford University, 1 South Parks Road, Oxford OX1 3TG (United Kingdom)
2011-10-31
Highlights: • Non-linear modeling of metabonomic data using K-OPLS. • Automated optimization of the kernel parameter by simulated annealing. • K-OPLS provides improved prediction performance for exemplar spectral data sets. • Software implementation available for R and Matlab under GPL v2 license. - Abstract: Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and
Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture
Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne
2017-01-01
This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to the learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…
Developing adaptive user interfaces using a game-based simulation environment
Brake, G.M. te; Greef, T.E. de; Lindenberg, J.; Rypkema, J.A.; Smets-Noor, N.J.J.M.
2006-01-01
In dynamic settings, user interfaces can provide more optimal support if they adapt to the context of use. Providing adaptive user interfaces to first responders may therefore be fruitful. A cognitive engineering method that incorporates development iterations in both a simulated and a real-world en
Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej
2016-01-01
The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to
Stauffer, D.; Arndt, H.
Can unicellular organisms survive a drastic temperature change, and adapt to it after many generations? In simulations of the Penna model of biological aging, both extinction and adaptation were found for asexual and sexual reproduction as well as for parasex. These model investigations are the basis for the design of evolution experiments with heterotrophic flagellates.
Parallel Mesh Adaptive Techniques for Complex Flow Simulation: Geometry Conservation
Angelo Casagrande
2012-01-01
Dynamic mesh adaptation on unstructured grids, by localised refinement and derefinement, is a very efficient tool for enhancing solution accuracy and optimising computational time. One of the major drawbacks, however, resides in the projection of the new nodes created during the refinement process onto the boundary surfaces. This can be addressed by the introduction of a library capable of handling geometric properties given by a CAD (computer-aided design) description. This is of particular interest also for enhancing the adaptation module when the mesh is being smoothed, and hence moved, to then reproject it onto the surface of the exact geometry.
赵敬和; 谢玲
2011-01-01
The traveling salesman problem (TSP) is an NP-complete problem that is easy to describe but hard to solve: the number of possible routes grows exponentially with the number of cities. This paper uses a simulated annealing algorithm implemented in LabVIEW to solve the problem for the first time. Simulation results show that LabVIEW's array operations can effectively implement simulated annealing for the TSP. Compared with other methods, this approach is simpler, more practical, and more precise; in addition, it is faster and is suitable for TSP instances with any number of cities.
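Independently of the LabVIEW array implementation described above, the SA-for-TSP scheme can be sketched in a few lines. The 2-opt neighborhood and the cooling constants below are illustrative choices, not the paper's settings:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_tsp(dist, t0=10.0, cooling=0.999, steps=20000, seed=0):
    """Simulated annealing over 2-opt moves (segment reversals)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    cur_len, t = best_len, t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, dist)
        # Accept improvements always, worse tours with probability exp(-delta/T).
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len
```

Calling `sa_tsp` with any symmetric distance matrix returns a tour as a list of city indices and its length; for small instances it typically recovers the optimal ordering.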
Computer simulation program is adaptable to industrial processes
Schultz, F. E.
1966-01-01
The reaction kinetics ablation program (REKAP), developed to simulate the ablation of various materials, provides mathematical formulations for computer programs that can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.
The adaptation method in the Monte Carlo simulation for computed tomography
Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)
2015-06-15
The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations. Assuming no radiation scattering in the vicinity of detectors minimized artifacts in the reconstructed image.
Toward adaptive VR simulators combining visual, haptic, and brain-computer interfaces.
Lécuyer, Anatole; George, Laurent; Marchal, Maud
2013-01-01
The next generation of VR simulators could take into account a novel input: the user's mental state, as measured with electrodes and a brain-computer interface. One illustration of this promising path is a project that adapted a guidance system's force feedback to the user's mental workload in real time. A first application of this approach is a medical training simulator that provides virtual assistance that adapts to the trainee's mental activity. Such results pave the way to VR systems that will automatically reconfigure and adapt to their users' mental states and cognitive processes.
Photovoltaic Power Prediction Based on Scene Simulation Knowledge Mining and Adaptive Neural Network
Dongxiao Niu
2013-01-01
Influenced by light, temperature, atmospheric pressure, and other random factors, photovoltaic power is characterized by volatility and intermittency. Accurately forecasting photovoltaic power can effectively improve the security and stability of the power grid. This paper comprehensively analyzes the influence of light intensity, day type, temperature, and season on photovoltaic power. Using the proposed scene simulation knowledge mining (SSKM) technique, the influencing factors are clustered and fused into the prediction model. Combining an adaptive algorithm with a neural network, an adaptive neural network prediction model is established. An actual numerical example verifies the effectiveness and applicability of the proposed photovoltaic power prediction model based on scene simulation knowledge mining and an adaptive neural network.
齐小刚; 王云鹤
2011-01-01
To address the problem of parameter setting in applications of the Hopfield neural network, the working principle of the Hopfield neural network is described and parameter selection is analyzed for the model in solving the traveling salesman problem (TSP). On this basis, an evaluation function for the network is established by normalizing the output data, and a simulated annealing algorithm is then used to select the optimal parameters. Experimental results show that, after parameter optimization, the Hopfield neural network model obtains the optimal solution of TSP problems more effectively and more quickly.
Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C
2015-01-01
The effects of the nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations from the Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, for different time periods to produce different degrees of α-Ti nano-phases in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion-resistant α-Ti phases exhibited more promising bio-corrosion resistance, due to their superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses, in which the reactive Zr2Cu phases induced serious galvanic corrosion and lower bio-corrosion resistance. Thus, whether a fully amorphous or a partially crystallized metallic glass exhibits better bio-corrosion resistance depends on the nature of the crystallized phase.
Yu Lin
2015-01-01
In recent years, logistics systems with multiple suppliers and plants in neighboring regions have been flourishing worldwide. However, high logistics costs remain a problem for such systems due to lack of information sharing and cooperation. This paper proposes an extended mathematical model that minimizes transportation and pipeline inventory costs via the many-to-many milk-run routing mode. Because the problem is NP-hard, a two-stage heuristic algorithm is developed by comprehensively considering its characteristics. More specifically, an initial satisfactory solution is generated in the first stage through a greedy heuristic algorithm to minimize the total number of vehicle service nodes and the best-insertion heuristic algorithm to determine each vehicle's route. Then, a simulated annealing algorithm (SA) with limited search scope is used to improve the initial satisfactory solution. Thirty numerical examples are employed to test the proposed algorithms. The experiment results demonstrate the effectiveness of this algorithm. Further, the superiority of the many-to-many transportation mode over other modes is demonstrated via two case studies.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2017-04-19
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
ASHWIN MISHRA,
2011-01-01
In this study, a singularity analysis of the six degree-of-freedom (DOF) Stewart platform using various heuristic methods in a specified design configuration has been carried out. The Jacobian matrix of the Stewart platform is obtained, the absolute value of the determinant of the Jacobian is taken as the objective function, and the least value of this objective function is searched for in the reachable workspace of the Stewart platform so as to find the singular configurations. A configuration is singular if the value of this objective function is zero. The results obtained by the different methods, namely the genetic algorithm, particle swarm optimization and its variants, and simulated annealing, are compared with each other. The variable sets considered are the respective desirable platform motions in the form of translation and rotation in six degrees of freedom. This paper hence presents a proper comparative study of these algorithms based on the results obtained and highlights the advantage of each in terms of computational cost and accuracy.
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, where this route can be a combination of public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which helps users express their uncertainty in pairwise comparisons of criteria. After determining the criteria weights, the proposed SA algorithm was used for determining an optimal route from an origin to a destination. One of the most important problems in a meta-heuristic algorithm is trapping in local minima. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. Also, the fare, the time, the user's inconvenience, and the length of the path were considered as the criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran, with a GUI written in the MATLAB programming language. The results showed the high efficiency and speed of the proposed algorithm, which supports our analyses.
Optimal Nationwide Traveling Scheme Based on Simulated Annealing Algorithm
吕鹏举; 原杰; 吕菁华
2011-01-01
An optimal itinerary scheme to travel through all provincial capitals, municipalities, Hong Kong, Macao, and Taipei is designed. The practical problems of finding the shortest path and lowest cost for travelling to these places are analyzed. Taking full account of the relationships among cost, route, duration, and means of transportation, a model is established with the objectives of shortest path and least cost and time. The simulated annealing algorithm is adopted to solve the model, and a travel path that saves both money and time is obtained by comprehensive consideration. The results show the correctness and practical value of this travel scheme.
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
A Model for Capturing Team Adaptation in Simulated Emergencies
Paltved, Charlotte; Musaeus, Peter
2013-01-01
Introduction/Background: Acute critical situations and emergencies are among the most challenging situations in medicine, where acute care teams are often constituted on an ad hoc basis. In such teams, excellent performance depends on the ability of the team to function … events like closed-loop communication.1 A more nuanced understanding of team communication has the potential to enhance scholarship in interprofessional endeavours. In high-risk environments, team performance depends on the ability of teams to quickly alter actions in response to rapidly changing … and reviewed. The research design used an explorative case study methodology to answer the research question: Which factors most strongly mediate adaptive team performance? Results: Through an iterative, inductive process, data supported the building of the Team Adaptation Tool (TATool) that captures …
Enzo+Moray: Radiation Hydrodynamics Adaptive Mesh Refinement Simulations with Adaptive Ray Tracing
Wise, John H
2010-01-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray tracing scheme, and its parallel implementation into the adaptive mesh refinement (AMR) cosmological hydrodynamics code, Enzo. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilised to study a broad range of astrophysical problems, such as stellar and black hole (BH) feedback. Inaccuracies can arise from large timesteps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. (2006, 2009). We further test our method with more dynamical situations, for example, the propagation of an ionisation front through a Rayleigh-Taylor instability, time-varying luminosities, and collimated radiation. The test suite also includes an...
Wavelet-Based Adaptive Solvers on Multi-core Architectures for the Simulation of Complex Systems
Rossinelli, Diego; Bergdorf, Michael; Hejazialhosseini, Babak; Koumoutsakos, Petros
We build wavelet-based adaptive numerical methods for the simulation of advection dominated flows that develop multiple spatial scales, with an emphasis on fluid mechanics problems. Wavelet based adaptivity is inherently sequential and in this work we demonstrate that these numerical methods can be implemented in software that is capable of harnessing the capabilities of multi-core architectures while maintaining their computational efficiency. Recent designs in frameworks for multi-core software development allow us to rethink parallelism as task-based, where parallel tasks are specified and automatically mapped into physical threads. This way of exposing parallelism enables the parallelization of algorithms that were considered inherently sequential, such as wavelet-based adaptive simulations. In this paper we present a framework that combines wavelet-based adaptivity with the task-based parallelism. We demonstrate good scaling performance obtained by simulating diverse physical systems on different multi-core and SMP architectures using up to 16 cores.
Fuzzy Backstepping Torque Control Of Passive Torque Simulator With Algebraic Parameters Adaptation
Ullah, Nasim; Wang, Shaoping; Wang, Xingjian
2015-07-01
This work presents fuzzy backstepping control techniques applied to a load simulator to achieve good tracking performance in the presence of extra torque and nonlinear friction effects. Assuming that the parameters of the system are uncertain and bounded, an algebraic parameter adaptation algorithm is used to estimate the unknown parameters. The effect of the transient fuzzy estimation error on the parameter adaptation algorithm is analyzed, and this error is further compensated by a saturation-function-based adaptive control law working in parallel with the actual system to improve the transient performance of the closed-loop system. The saturation-function-based adaptive control term is large during the transient and settles to an optimal lower value in the steady state, for which the closed-loop system remains stable. The simulation results verify the validity of the proposed control method applied to a complex aerodynamic passive load simulator.
Simulating Computer Adaptive Testing With the Mood and Anxiety Symptom Questionnaire
G. Flens; N. Smits; I. Carlier; A.M. van Hemert; E. de Beurs
2015-01-01
In a post hoc simulation study (N = 3,597 psychiatric outpatients), we investigated whether the efficiency of the 90-item Mood and Anxiety Symptom Questionnaire (MASQ) could be improved for assessing clinical subjects with computerized adaptive testing (CAT). A CAT simulation was performed on each o
Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.
2016-07-01
The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide-field correction and large sky coverage. Their performance will be improved by post-processing techniques such as point spread function (PSF) deconvolution. PSF estimation involves characterizing the different error sources in the AO system, and such error contributors are difficult to estimate: simulation tools are a good way to do so. Within COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, we have developed an estimation tool that provides a comprehensive error budget from the outputs of a single simulation run.
Ufa Ruslan A.
2015-01-01
The presented research is motivated by the need for new methods and tools for the adequate simulation of intelligent electric power systems with active-adaptive electric networks (IES), including Flexible Alternating Current Transmission System (FACTS) devices. The key requirements for such simulation are formulated. The presented analysis of IES simulation results confirms the need for a hybrid modelling approach.
Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation
Van Esch, J.M.
2010-01-01
Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses
Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator
Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.
2017-05-01
Research on adaptive individual and team-based training was analyzed, showing that both in Russia and abroad, the training and retraining of AASTM operators usually includes production training, training in general computer and office equipment skills, and simulator training, including virtual simulators that use computers to reproduce real-world manufacturing situations; as a rule, the evaluation of AASTM operators' knowledge is determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to the training and retraining of AASTM operators covers only the technical training of operators and the testing of their knowledge by assessing their actions in a simulated environment.
许闻清; 陈剑
2011-01-01
Considering the respective merits and drawbacks of the genetic algorithm and the simulated annealing algorithm, the two were combined, with the crossover and mutation probabilities adjusted dynamically to prevent premature convergence of the genetic algorithm. The resulting improved genetic simulated annealing algorithm was applied to the optimization of a powertrain mounting system.
ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing
Wise, John H.; Abel, Tom
2011-07-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
Simulation Data Management for Adaptive Design Of Experiment
BLONDET, Gaëtan; BOUDAOUD, Nassim; LE DUIGOU, Julien
2015-01-01
Recent evolutions of computer-aided product development and the massive integration of numerical simulation into the design process require new methodologies to decrease computational costs. Numerical design of experiments is used to increase product quality by taking uncertainties into account during product development, but this method can be time-consuming and involves a high computational cost. This paper presents a literature review of design of experiments method...
Toward parallel, adaptive mesh refinement for chemically reacting flow simulations
Devine, K.D.; Shadid, J.N.; Salinger, A.G. Hutchinson, S.A. [Sandia National Labs., Albuquerque, NM (United States); Hennigan, G.L. [New Mexico State Univ., Las Cruces, NM (United States)
1997-12-01
Adaptive numerical methods offer greater efficiency than traditional numerical methods by concentrating computational effort in regions of the problem domain where the solution is difficult to obtain. In this paper, the authors describe progress toward adding mesh refinement to MPSalsa, a computer program developed at Sandia National Laboratories to solve coupled three-dimensional fluid flow and detailed reaction chemistry systems for modeling chemically reacting flow on large-scale parallel computers. Data structures that support refinement and dynamic load-balancing are discussed. Results using uniform refinement with mesh sequencing to improve convergence to steady-state solutions are also presented. Three examples are presented: a lid-driven cavity, a thermal convection flow, and a tilted chemical vapor deposition reactor.
WISE An Adaptative Simulation of the LHC Optics
Hagen, P; Koutchouk, Jean-Pierre; Risselada, Thys; Sanfilippo, S; Todesco, E; Wildner, E
2006-01-01
The beam dynamics in LHC requires a tight control of the field quality and geometry of the magnets. As the production advances, decisions have to be made on the acceptance of possible imperfections. To ease decision making, an adaptative model of the LHC optics has been built, based on the information available on the day (e.g. magnetic measurements at warm or cold, magnet allocation to machine slots) as well as on statistical evaluations for the missing information (e.g. magnets yet to be built, measured, or for non-allocated slots). The uncertainties are included: relative and absolute measurement errors, warm-to-cold correlations for the fraction of magnets not measured at cold, hysteresis and power supply accuracy. The pre-processor WISE generates instances of the LHC field errors for the MAD-X program, with the possibility of selecting various sources. We present an application to estimate the expected beta-beating.
Block-Structured Adaptive Mesh Refinement Algorithms for Vlasov Simulation
Hittinger, J A F
2012-01-01
Direct discretization of continuum kinetic equations, like the Vlasov equation, is under-utilized because the distribution function generally exists in a high-dimensional (>3D) space and computational cost increases geometrically with dimension. We propose to use high-order finite-volume techniques with block-structured adaptive mesh refinement (AMR) to reduce the computational cost. The primary complication comes from a solution state composed of variables of different dimensions. We develop the algorithms required to extend standard single-dimension block-structured AMR to the multi-dimension case. Specifically, algorithms for reduction and injection operations that transfer data between mesh hierarchies of different dimensions are explained in detail. In addition, modifications to the basic AMR algorithm that enable the use of high-order spatial and temporal discretizations are discussed. Preliminary results for a standard 1D+1V Vlasov-Poisson test problem are presented. Results indicate that there is po...
The numerical simulation tool for the MAORY multiconjugate adaptive optics system
Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.
2016-07-01
The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will correct the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte-Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement multiconjugate adaptive optics, combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture, describe the modelled instrument components and outline the control strategies implemented in the code.
Motion sickness adaptation to Coriolis-inducing head movements in a sustained G flight simulator.
Newman, Michael C; McCarthy, Geoffrey W; Glaser, Scott T; Bonato, Frederick; Bubka, Andrea
2013-02-01
Technological advances have allowed centrifuges to become more than physiological testing and training devices; sustained G, fully interactive flight simulation is now possible. However, head movements under G can result in vestibular stimulation that can lead to motion sickness (MS) symptoms that are potentially distracting, nauseogenic, and unpleasant. In the current study an MS adaptation protocol was tested for head movements under +Gz. Experienced pilots made 14 predetermined head movements in a sustained G flight simulator (at 3 +Gz) on 5 consecutive days and 17 d after training. Symptoms were measured after each head turn using a subjective 0-10 MS scale. The Simulator Sickness Questionnaire (SSQ) was also administered before and after each daily training session. After five daily training sessions, normalized mean MS scores were 58% lower than on Day 1. Mean total, nausea, and disorientation SSQ scores were 55%, 52%, and 78% lower, respectively. During retesting 17 d after training, nearly all scores indicated 90-100% retention of training benefits. The reduction of unpleasant effects associated with sustained G flight simulation using an adaptation training protocol may enhance the effectiveness of simulation. Practical use of sustained G simulators is also likely to be interspersed with other types of ground and in-flight training. Hence, it would be undesirable and unpleasant for trainees to lose adaptation benefits after a short gap in centrifuge use. However, current results suggest that training gaps in excess of 2 wk may be permissible with almost no loss of adaptation training benefits.
Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest
2016-05-01
In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling an artificial neural network (ANN) with genetic algorithm (ANN-GA) and simulated annealing (ANN-SA) methods were utilised. Ratios of the depth of pit penetration in the aquifer to the aquifer thickness, the pit bottom radius to its top radius, the inverse of pit advance time, and the HH in the observation wells to the distance of the wells from the centre of the pit were used as network inputs. To achieve the objective, two hybrid models consisting of ANN-GA and ANN-SA with a 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of the two earlier models, two new models were developed to predict the HH in the observation wells over the period of the mining process. The accuracy and reliability of the models are verified using field data, the results of a numerical finite element model using SEEP/W, the outputs of simple ANNs, and some well-known analytical solutions. Predictions obtained by the hybrid methods are closer to the field data than the outputs of the analytical and simple ANN models. The results show that despite using fewer and simpler parameters, the ANN-GA and, to some extent, the ANN-SA models are able to compete with the numerical models.
Spatio-Temporal MEG Source Localization Using Simulated Annealing
霍小林; 李军; 刘正东
2001-01-01
Locating the sources of brain magnetic fields is a basic problem of magnetoencephalography (MEG), and localizing multiple current dipoles is a difficult aspect of the MEG inverse problem. By studying the spatio-temporal source modeling (STSM) of MEG, a method combining STSM with simulated annealing to locate multiple current dipoles is presented. This method overcomes the tendency of other optimization methods to become trapped in local minima. The dipole parameters can be separated into linear and nonlinear components, and optimizing only the nonlinear components greatly reduces the dimensionality of the search. Compared with MUSIC (MUltiple SIgnal Classification), this method correspondingly relaxes the requirement that the dipole sources be independent.
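The dimension-reduction idea in this abstract, annealing only the nonlinear parameters while solving the linear ones in closed form, can be sketched on a toy separable model. This is an illustrative stand-in (a single exponential with one nonlinear rate and one linear amplitude), not the MEG forward model:

```python
# Toy separable fit y = a * exp(-theta * x): anneal the nonlinear rate theta,
# solve the linear amplitude a by 1-D least squares at every step.
import math
import random

def residual(theta, xs, ys):
    # Best linear amplitude a for fixed theta, then the sum of squared errors.
    basis = [math.exp(-theta * x) for x in xs]
    a = sum(b * y for b, y in zip(basis, ys)) / sum(b * b for b in basis)
    return sum((y - a * b) ** 2 for b, y in zip(basis, ys)), a

def anneal_theta(xs, ys, t0=1.0, alpha=0.99, steps=2000, seed=1):
    rng = random.Random(seed)
    theta = rng.uniform(0.1, 5.0)
    err, _ = residual(theta, xs, ys)
    t = t0
    for _ in range(steps):
        cand = abs(theta + rng.gauss(0.0, 0.1))   # random-walk proposal
        cerr, _ = residual(cand, xs, ys)
        if cerr < err or rng.random() < math.exp(-(cerr - err) / t):
            theta, err = cand, cerr
        t *= alpha
    return theta, residual(theta, xs, ys)[1]
```

The annealer searches one dimension instead of two; the same trick is what lets the STSM approach anneal dipole locations while the time courses are obtained linearly.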
BENDING RAY-TRACING BASED ON SIMULATED ANNEALING METHOD
周竹生; 谢金伟
2011-01-01
This paper proposes a new ray-tracing method based on the concept of simulated annealing. The new method not only resolves the traditional ray-tracing methods' excessive dependence on pre-established initial ray paths, but also ensures the construction of satisfactory ray paths and the associated traveltimes between fixed sources and receivers, even for models with highly complicated velocity fields; ray paths whose traveltimes approach the global minimum are successfully found. Furthermore, the algorithm can also compute ray paths with locally minimal traveltimes and can easily constrain them by directing rays through fixed points. Trial results on theoretical models demonstrate the feasibility and stability of the method.
Maikel Méndez-Morales
2014-09-01
This article presents the application of the Simulated Annealing (SA) algorithm to the optimal design of a water distribution system (WDS). SA is a metaheuristic search algorithm based on an analogy between the annealing process in metals (the controlled cooling of a body) and the solution of combinatorial optimization problems. The SA algorithm, together with various mathematical models, has been used successfully in optimal WDS design. The full-scale WDS of the community of Marsella, in San Carlos, Costa Rica, was used as a case study. The SA algorithm was implemented through the well-known EPANET model, via the WaterNetGen extension. Three automated variants of the SA algorithm were compared with the manual trial-and-error design of the Marsella WDS, carried out using only unit pipe costs. The results show that the three automated SA schemes yielded unit costs below 0.49, as a fraction of the original cost of the trial-and-error design. This demonstrates that the SA algorithm is capable of optimizing the combinatorial problems involved in the least-cost design of full-scale water distribution systems.
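The combinatorial problem SA solves here, choosing a diameter class per pipe at minimum cost, can be caricatured without a hydraulic solver. In the sketch below the diameter classes, unit costs, capacities and the infeasibility penalty are all invented for illustration; a real design would evaluate the hydraulics with EPANET rather than a per-pipe capacity check:

```python
# Toy least-cost pipe sizing by SA: pick one diameter class per pipe,
# penalizing pipes whose assumed capacity falls short of their demand.
import math
import random

DIAMETERS = [50, 75, 100, 150, 200]                              # mm (assumed)
UNIT_COST = {50: 1.0, 75: 1.8, 100: 3.0, 150: 5.5, 200: 9.0}     # $/m (assumed)
CAPACITY = {50: 2.0, 75: 5.0, 100: 9.0, 150: 20.0, 200: 40.0}    # l/s (assumed)

def cost(design, lengths, demands, penalty=1e4):
    total = sum(UNIT_COST[d] * length for d, length in zip(design, lengths))
    total += penalty * sum(1 for d, q in zip(design, demands) if CAPACITY[d] < q)
    return total

def anneal_design(lengths, demands, t0=50.0, alpha=0.995, steps=4000, seed=2):
    rng = random.Random(seed)
    design = [rng.choice(DIAMETERS) for _ in lengths]
    c = cost(design, lengths, demands)
    t = t0
    for _ in range(steps):
        cand = design[:]
        cand[rng.randrange(len(cand))] = rng.choice(DIAMETERS)  # resize one pipe
        cc = cost(cand, lengths, demands)
        if cc < c or rng.random() < math.exp(-(cc - c) / t):
            design, cc = (cand, cc) if True else (design, c)
            design, c = cand, cc
        t *= alpha
    return design, c
```

Because this toy cost is separable per pipe, the annealer converges to the cheapest feasible class for each pipe; the interest of SA in the real problem is that network hydraulics couple the pipes.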
Biswas, A.; Sharma, S. P.
2012-12-01
The self-potential (SP) method is an important geophysical technique that measures the electrical potential due to natural current sources in the Earth's subsurface. An inclined sheet-type model is a very familiar structure associated with mineralization, fault planes, groundwater flow and many other geological features that exhibit self-potential anomalies. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures. The forward response over a two-dimensional dipping sheet-type structure can be computed in three different ways, using five variables in each case, and the complexity of the inversion differs among the three formulations. In the present study, an interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed, which yielded new insight into the uncertainty and equivalence of model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, dip and quality. A comparative evaluation of the three forward approaches is performed to assess the efficacy of each in resolving possible ambiguity. Even though each forward formulation yields the same forward response, optimizing different sets of variables via the different forward problems poses different kinds of ambiguity in the interpretation. Comparing the three approaches in optimization, one is found to be best suited for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three methods to show the efficacy and suitability of the best one. It is important to use the forward problem in the optimization that yields the
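A core ingredient of such VFSA inversions is the temperature-dependent, Cauchy-like move generator with exponential cooling. The sketch below applies it to a toy two-parameter objective rather than the self-potential forward model; the bounds, cooling constant and step count are all assumed values:

```python
# Sketch of very fast simulated annealing (VFSA): Ingber-style bounded moves
# that are heavy-tailed at high temperature and local at low temperature.
import math
import random

def vfsa_step(x, lo, hi, t, rng):
    u = rng.random()
    y = math.copysign(t * ((1.0 + 1.0 / t) ** abs(2.0 * u - 1.0) - 1.0), u - 0.5)
    xn = x + y * (hi - lo)
    while not (lo <= xn <= hi):            # reflect back into the bounds
        xn = 2 * lo - xn if xn < lo else 2 * hi - xn
    return xn

def vfsa(f, bounds, t0=1.0, c=2.0, steps=3000, seed=3):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = x[:], fx
    n = len(bounds)
    for k in range(steps):
        # Exponential cooling: t_k = t0 * exp(-c * k^(1/n)) for n parameters.
        t = max(t0 * math.exp(-c * (k + 1) ** (1.0 / n)), 1e-12)
        cand = [vfsa_step(xi, lo, hi, t, rng) for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand[:], fc
    return best, fbest
```

The heavy-tailed generator keeps occasional full-range jumps even late in the schedule, which is what lets VFSA cool much faster than classical SA without freezing into a local minimum.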
Fonville, Judith M; Bylesjö, Max; Coen, Muireann; Nicholson, Jeremy K; Holmes, Elaine; Lindon, John C; Rantalainen, Mattias
2011-10-31
Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of Trypanosoma brucei brucei infection. Automated and user-friendly procedures for the kernel-optimization have been incorporated into version 1.1.1 of the freely available K-OPLS software package for both R and Matlab to enable easy application of K-OPLS for non-linear prediction modeling.
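The kernel-parameter optimization by SA described above can be illustrated on a much smaller stand-in than K-OPLS: annealing the bandwidth of a one-dimensional Nadaraya-Watson regressor against a hold-out error. Everything below (the regressor, the log-scale moves, the settings) is an illustrative assumption, not the SA-K-OPLS procedure itself:

```python
# Toy SA over a kernel bandwidth gamma, scored by hold-out squared error.
import math
import random

def nw_predict(x, xs, ys, gamma):
    # Nadaraya-Watson regression with a Gaussian kernel of width 1/sqrt(gamma).
    w = [math.exp(-gamma * (x - xi) ** 2) for xi in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s > 0 else 0.0

def holdout_error(gamma, train, test):
    xs, ys = train
    return sum((nw_predict(x, xs, ys, gamma) - y) ** 2 for x, y in zip(*test))

def anneal_gamma(train, test, t0=1.0, alpha=0.98, steps=400, seed=4):
    rng = random.Random(seed)
    g = 1.0
    e = holdout_error(g, train, test)
    best_g, best_e = g, e
    t = t0
    for _ in range(steps):
        cand = max(1e-3, g * math.exp(rng.gauss(0.0, 0.3)))  # log-scale move
        ce = holdout_error(cand, train, test)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            g, e = cand, ce
            if ce < best_e:
                best_g, best_e = cand, ce
        t *= alpha
    return best_g, best_e
```

Multiplicative (log-scale) proposals are a natural choice here because kernel widths matter on a ratio scale; the same consideration applies when annealing the kernel parameter of K-OPLS.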
Morton, Gerard C; Sankreacha, Raxa; Halina, Patrick; Loblaw, Andrew
2008-01-01
Dose distribution in a high-dose-rate (HDR) brachytherapy implant is optimized by adjusting source dwell positions and dwell times along the implanted catheters. Inverse planning with fast simulated annealing (IPSA) is a recently developed algorithm for anatomy-based inverse planning, capable of generating an optimized plan in less than 1min. The purpose of this study is to compare dose distributions achieved using IPSA to those obtained with a graphical optimization (GrO) algorithm for prostate HDR brachytherapy. This is a retrospective study of 63 consecutive prostate HDR brachytherapy implants planned and treated using on-screen GrO to a dose of 10Gy per implant. All plans were then recalculated using IPSA, without changing any parameters (contours, catheters, number, or location of dwell positions). The IPSA and GrO plans were compared with respect to target coverage, conformality, dose homogeneity, and normal tissue dose. The mean volume of target treated to 100% of prescription dose (V(100)) was 97.1% and 96.7%, and mean Conformal Index 0.71 and 0.68 with GrO and IPSA, respectively. IPSA plans had a higher mean homogeneity index (0.69 vs. 0.63, p<0.001) and lower volume of target receiving 150% (30.2% vs. 35.6%, p<0.001) and 200% (10.7% vs. 12.7%, p<0.001) of the prescription dose. Mean dose to urethra, rectum, and bladder were all significantly lower with IPSA (p<0.001). IPSA plans tended to be more reproducible, with smaller standard deviations for all measured parameters. Plans generated using IPSA provide similar target coverage to those obtained using GrO but with lower dose to normal structures and greater dose homogeneity.
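Dwell-time optimization of the kind IPSA automates can be caricatured as annealing non-negative dwell times against a penalized dose objective. The kernel matrix, prescription and penalty weights below are invented for illustration; IPSA's actual anatomy-based objective and sub-minute speed are not reproduced here:

```python
# Toy SA over HDR dwell times: penalize target points below prescription,
# and weakly penalize total dwell time to keep the plan efficient.
import math
import random

def dose(t, kernel):
    # kernel[i][j]: dose to calculation point i per unit dwell time at position j.
    return [sum(kij * tj for kij, tj in zip(row, t)) for row in kernel]

def objective(t, kernel, rx=10.0, w_under=100.0, w_time=1.0):
    d = dose(t, kernel)
    under = sum(max(0.0, rx - di) ** 2 for di in d)
    return w_under * under + w_time * sum(t)

def anneal_dwells(kernel, n_dwell, t0=50.0, alpha=0.995, steps=3000, seed=6):
    rng = random.Random(seed)
    t = [1.0] * n_dwell
    e = objective(t, kernel)
    best, ebest = t[:], e
    temp = t0
    for _ in range(steps):
        cand = t[:]
        j = rng.randrange(n_dwell)
        cand[j] = max(0.0, cand[j] + rng.gauss(0.0, 0.5))  # keep times >= 0
        ec = objective(cand, kernel)
        if ec < e or rng.random() < math.exp(-(ec - e) / temp):
            t, e = cand, ec
            if ec < ebest:
                best, ebest = cand[:], ec
        temp *= alpha
    return best, ebest
```

In a real planner the penalty terms also cap overdose to the urethra, rectum and bladder; adding those terms is a matter of extending `objective` with organ-specific dose points and weights.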
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson , David
2012-04-11
A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
Supply-chain management based on simulated annealing algorithm%基于模拟退火算法的供应链管理分析
董雪
2012-01-01
With economic globalization, more and more enterprises focus on their core competitiveness, and logistics operations are gradually being separated from production and processing into fairly independent units. How to manage the relationship between suppliers and producers effectively (supply-chain management) has therefore become a focus of enterprise competition and profit. Previous solutions to supply-chain models have mostly been based on genetic algorithms, which are mature and effective but have poor local search ability and long computing times. This paper applies a simulated annealing algorithm to the solution of a supply-chain management model, and an example is given to show its effectiveness.
Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim
2015-09-01
We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone to implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens, and agreement of the respective binary images was high (median: 0.90, 1.-3. Qu.: 0.89-0.91). Direct comparison showed that automated (median: 0.82, 1.-3. Qu.: 0.75-0.85) and manual (median: 0.61, 1.-3. Qu.: 0.52-0.67) BIC measurements from μCT were significantly positively correlated with HI (median: 0.65, 1.-3. Qu.: 0.59-0.72) (manual: R(2)=0.87, automated: R(2)=0.75, p<0.001). The results show that this method yields promising results and that μCT may become a valid alternative for assessing osseointegration in three dimensions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modernizing quantum annealing using local searches
Chancellor, Nicholas
2017-02-01
I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques.
Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Ng, C. S.; Bhattacharjee, A.
2006-10-01
A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate the problem of island coalescence instability (ICI) in 2D. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described in [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys., 215, 59-80 (2006)], allowing both statically refined and dynamically refined grids. ICI is an MHD process that can produce strong current sheets and subsequent reconnection and heating in a high-Lundquist number plasma such as the solar corona [cf., Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Thus, it is desirable to use adaptive refinement grids to increase resolution while maintaining accuracy. Results are compared with simulations using a finite difference method with the same refinement grid, as well as with pseudo-spectral simulations using a uniform grid.
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems such as propellant feedline systems are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of short time steps initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
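The idea of feedback-controlled time stepping can be illustrated with a generic sketch (not the authors' network-flow algorithm; the step-doubling error estimator, controller constants, and test problem are assumptions):

```python
import math

def adaptive_euler(f, y0, t0, t_end, dt0=1e-2, tol=1e-4):
    """Explicit Euler with a step-doubling error estimate and feedback step control:
    the step grows during slow phases and shrinks during fast ones."""
    t, y, dt = t0, y0, dt0
    accepted = 0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = y + dt * f(t, y)                         # one full step
        y_half = y + 0.5 * dt * f(t, y)                   # two half steps
        y_half = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_full - y_half)                        # local error estimate
        if err <= tol:                                    # accept the step
            t, y = t + dt, y_half
            accepted += 1
        # Feedback control of the step size, with growth/shrink limits
        dt *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / (err + 1e-16))))
    return y, accepted

# Toy usage: y' = -y, y(0) = 1 on [0, 1]; exact value at t = 1 is exp(-1)
y_end, n_steps = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0)
```

Rejected steps are simply retried with a smaller dt; the monitored variable here is a local truncation error estimate, whereas the paper monitors key physical variables of the network flow.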
Cluster Optimization and Parallelization of Simulations with Dynamically Adaptive Grids
Schreiber, Martin
2013-01-01
The present paper studies solvers for partial differential equations that work on dynamically adaptive grids stemming from spacetrees. Due to the underlying tree formalism, such grids can be decomposed efficiently into connected grid regions (clusters) on the fly. A graph over these clusters, classified according to their grid invariancy, workload, multi-core affinity, and further metadata, represents the inter-cluster communication. While stationary clusters can already be handled more efficiently than their dynamic counterparts, we propose to treat them as atomic grid entities and introduce a skip mechanism that allows the grid traversal to omit those regions completely. The communication graph ensures that the cluster data are nevertheless kept consistent, and several shared-memory parallelization strategies are feasible. A hyperbolic problem that has to remesh selected mesh regions iteratively to preserve conforming tessellations serves as the benchmark for the present work. We discuss runtime improvements resulting from the skip mechanism and the implications for shared-memory performance and load balancing. © 2013 Springer-Verlag.
Role-play simulations for climate change adaptation education and engagement
Rumore, Danya; Schenk, Todd; Susskind, Lawrence
2016-08-01
In order to effectively adapt to climate change, public officials and other stakeholders need to rapidly enhance their understanding of local risks and their ability to collaboratively and adaptively respond to them. We argue that science-based role-play simulation exercises -- a type of 'serious game' involving face-to-face mock decision-making -- have considerable potential as education and engagement tools for enhancing readiness to adapt. Prior research suggests role-play simulations and other serious games can foster public learning and encourage collective action in public policy-making contexts. However, the effectiveness of such exercises in the context of climate change adaptation education and engagement has heretofore been underexplored. We share results from two research projects that demonstrate the effectiveness of role-play simulations in cultivating climate change adaptation literacy, enhancing collaborative capacity and facilitating social learning. Based on our findings, we suggest such exercises should be more widely embraced as part of adaptation professionals' education and engagement toolkits.
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
Optimal Control Problem of Feeding Adaptations of Daphnia and Neural Network Simulation
Kmet', Tibor; Kmet'ová, Mária
2010-09-01
A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints and open final time. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network [9] and a recurrent neural network for solving nonlinear projection equations [10]. The proposed simulation method is illustrated by the optimal control problem of the feeding adaptation of filter feeders of Daphnia. Results show that the adaptive critic based systematic approach and the neural network solution of nonlinear equations hold promise for obtaining optimal control with control and state constraints and open final time.
EVENT-DRIVEN SIMULATION OF INTEGRATE-AND-FIRE MODELS WITH SPIKE-FREQUENCY ADAPTATION
Lin Xianghong; Zhang Tianwen
2009-01-01
The evoked spike discharges of a neuron depend critically on the recent history of its electrical activity. A well-known example is the phenomenon of spike-frequency adaptation, a commonly observed property of neurons. In this paper, using a leaky integrate-and-fire model that includes an adaptation current, we propose an event-driven strategy to simulate integrate-and-fire models with spike-frequency adaptation. Such an approach is more precise than the traditional clock-driven numerical integration approach because the timing of spikes is treated exactly. In experiments, we simulated the adaptation time course of a single neuron and of a random network with spike-timing dependent plasticity using both event-driven and clock-driven strategies. The results indicate that (1) the temporal precision of spiking events affects the neuronal dynamics of both single neurons and the network under the different simulation strategies, and (2) the simulation time scales linearly with the total number of spiking events in the event-driven simulation strategy.
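A minimal sketch of the model class being simulated, a leaky integrate-and-fire neuron with a spike-triggered adaptation current, shows the spike-frequency adaptation itself (all parameters are illustrative assumptions; the sketch is clock-driven for brevity, whereas the paper's contribution is the exact event-driven treatment of spike times):

```python
def lif_adapt(i_ext=1.5, b=0.1, t_max=200.0, dt=0.1, tau_m=10.0, tau_a=100.0, v_th=1.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation current.

    Each spike increments the adaptation current `a` by `b`; `a` decays with
    time constant tau_a, so successive interspike intervals lengthen.
    """
    v, a, t, spikes = 0.0, 0.0, 0.0, []
    while t < t_max:
        v += dt * (i_ext - v - a) / tau_m   # membrane dynamics
        a += dt * (-a) / tau_a              # adaptation current decay
        if v >= v_th:                       # threshold crossing: spike and reset
            spikes.append(t)
            v = 0.0
            a += b                          # spike-triggered adaptation increment
        t += dt
    return spikes

spikes = lif_adapt()
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]  # interspike intervals
```

Under constant drive the accumulated adaptation current reduces the effective input, so the interspike intervals grow over the pulse train, the adaptation time course referred to in the abstract.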
Resolution-Adapted All-Atomic and Coarse-Grained Model for Biomolecular Simulations.
Shen, Lin; Hu, Hao
2014-06-10
We develop here an adaptive multiresolution method for the simulation of complex heterogeneous systems such as the protein molecules. The target molecular system is described with the atomistic structure while maintaining concurrently a mapping to the coarse-grained models. The theoretical model, or force field, used to describe the interactions between two sites is automatically adjusted in the simulation processes according to the interaction distance/strength. Therefore, all-atomic, coarse-grained, or mixed all-atomic and coarse-grained models would be used together to describe the interactions between a group of atoms and its surroundings. Because the choice of theory is made on the force field level while the sampling is always carried out in the atomic space, the new adaptive method preserves naturally the atomic structure and thermodynamic properties of the entire system throughout the simulation processes. The new method will be very useful in many biomolecular simulations where atomistic details are critically needed.
Monsalve, A.; Artigas, A.; Celentano, D.; Melendez, F.
2004-07-01
The heating and cooling curves during the batch annealing process of low carbon steel have been modeled using the finite element technique. This has made it possible to predict the transient thermal profile for every point of the annealed coils, particularly for the hottest and coldest ones. The results have been adequately validated through experimental measurements, with good agreement found between experimental values and those predicted by the model. Moreover, an Avrami recrystallization model has been coupled to this thermal balance computation. Interrupted annealing experiments have been performed by measuring the recrystallized fraction at the extreme points of the coil for different times. These data made it possible to validate the developed recrystallization model through reasonably good numerical-experimental fits. (Author) 6 refs.
Dynamically adaptive Lattice Boltzmann simulation of shallow water flows with the Peano framework
Neumann, Philipp
2015-09-01
We present a dynamically adaptive Lattice Boltzmann (LB) implementation for solving the shallow water equations (SWEs). Our implementation extends an existing LB component of the Peano framework. We revise the modular design with respect to the incorporation of new simulation aspects and LB models. The basic SWE-LB implementation is validated in different breaking-dam scenarios. We further provide a numerical study on the stability of the MRT collision operator used in our simulations. © 2014 Elsevier Inc. All rights reserved.
Sagert, I; Fattoyev, F J; Postnikov, S; Horowitz, C J
2015-01-01
Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. In this work, we present proof-of-principle 3D Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). We perform benchmark studies of $^{16}\mathrm{O}$, $^{208}\mathrm{Pb}$ and $^{238}\mathrm{U}$ nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so...
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)
2014-10-01
Recoverable content from briefing slides: topics include Shaped-Offset Quadrature Phase Shift Keying (SOQPSK), Orthogonal Frequency Division Multiplexing (OFDM), and Bit Error Rate (BER). Example application: link-dependent adaptive radio. Other applications: tradeoffs of phased array antennas, utility of multiple-access schemes, performance evaluation. Simulation framework architecture: object-oriented MATLAB to maximize reusability and flexibility.
Large-scale microstructural simulation of load-adaptive bone remodeling in whole human vertebrae
Badilatti, Sandro D.; Christen, Patrik; Levchuk, Alina; Hazrati Marangalou, Javad; Rietbergen, van Bert; Parkinson, Ian; Müller, Ralph
2016-01-01
Identification of individuals at risk of bone fractures remains challenging despite recent advances in bone strength assessment. In particular, the future degradation of the microstructure and load adaptation has been disregarded. Bone remodeling simulations have so far been restricted to small-volu
3D Simulation of Flow with Free Surface Based on Adaptive Octree Mesh System
Li Shaowu; Zhuang Qian; Huang Xiaoyun; Wang Dong
2015-01-01
The technique of adaptive tree meshing is an effective way to reduce computational cost through automatic adjustment of cell size according to necessity. In the present study, the 2D numerical N-S solver based on an adaptive quadtree mesh system was extended to a 3D one, in which a spatially adaptive octree mesh system and a multiple particle level set method were adopted for convenience in dealing with the air-water-structure multiple-medium coexisting domain. The stretching process of a dumbbell was simulated, and the results indicate that the meshes adapt well to the free surface. The collapse of a water column impinging on a circular cylinder was simulated, and the results show that the processes of fluid splitting and merging are properly simulated. The interaction of second-order Stokes waves with a square cylinder was simulated, and the obtained drag force is consistent with the result given by Morison's wave force formula with the coefficient values of the stable drag component and the inertial force component being set as 2.54.
Woudt, Edwin; de Boer, Pieter-Tjerk; van Ommeren, Jan C.W.
2007-01-01
Previous work on state-dependent adaptive importance sampling techniques for the simulation of rare events in Markovian queueing models used either no smoothing or a parametric smoothing technique, which was known to be non-optimal. In this paper, we introduce the use of kernel smoothing in this con
Largenet2: an object-oriented programming library for simulating large adaptive networks
Zschaler, Gerd
2012-01-01
The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://rincedd.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License.
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Kartsan, I. N.; Tyapkin, V. N.; Dmitriev, D. D.; Goncharov, A. E.; Zelenkov, P. V.; Kovalev, I. V.
2016-11-01
This paper considers the simulation of adaptive nulling mechanism patterns in hybrid reflector antenna systems with a 19-element feed, in which the radiation pattern is formed as a cluster. Cases of broadband and narrowband interference are studied.
The Self-Adaptive Fuzzy PID Controller in Actuator Simulated Loading System
Chuanhui Zhang
2013-05-01
This paper analyzes the structural principle of an actuator simulated loading system with variable stiffness and establishes a simplified model. It also studies the application of self-adaptive tuning of fuzzy PID (Proportion Integration Differentiation) in an actuator simulated loading system with variable stiffness. Because the loading system is connected to the steering system by a spring rod, strong coupling arises, together with parametric variations accompanying the changes in stiffness. Based on feed-forward compensation of the disturbance introduced by the motion of the steering engine, system performance can be improved by using fuzzy adaptive PID control to compensate for the parameter changes caused by changes in stiffness. By combining fuzzy control with traditional PID control, fuzzy adaptive PID control can choose the controller parameters more properly.
Adaptive Wavelet Collocation Method for Simulation of Time Dependent Maxwell's Equations
Li, Haojun; Rieder, Andreas; Freude, Wolfgang
2012-01-01
This paper investigates an adaptive wavelet collocation time domain method for the numerical solution of Maxwell's equations. In this method a computational grid is dynamically adapted at each time step by using the wavelet decomposition of the field at that time instant. In the regions where the fields are highly localized, the method assigns more grid points; in the regions where the fields are sparse, there are fewer grid points. On the adapted grid, update schemes with high spatial order and explicit time stepping are formulated. The method has a high compression rate, which substantially reduces the computational cost, allowing efficient use of computational resources. This adaptive wavelet collocation method is especially suitable for the simulation of guided-wave optical devices.
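The grid-adaptation idea, keep a point only where its wavelet detail coefficient is significant, can be sketched with an interpolating-wavelet-style criterion on a 1-D function (a simplified illustration, not the authors' Maxwell solver; the threshold `tol` and the steep-front test function are assumptions):

```python
import math

def adapt_grid(f, n=256, tol=1e-3):
    """Interpolating-wavelet-style grid adaptation on [0, 1]: a midpoint is kept
    as a collocation point only where its detail coefficient (deviation from
    linear interpolation between its coarser neighbours) exceeds `tol`."""
    xs = [i / n for i in range(n + 1)]
    vals = [f(x) for x in xs]
    keep = {0, n}                       # always keep the domain endpoints
    level = n
    while level >= 2:                   # sweep from coarsest to finest level
        half = level // 2
        for start in range(0, n, level):
            mid = start + half
            detail = vals[mid] - 0.5 * (vals[start] + vals[start + level])
            if abs(detail) > tol:
                keep.add(mid)
        level = half
    return [xs[i] for i in sorted(keep)]

# Toy usage: a steep front at x = 0.4 should attract most of the grid points
pts = adapt_grid(lambda x: math.tanh(40.0 * (x - 0.4)))
```

Points cluster where the field varies sharply and are dropped where it is smooth, which is the source of the compression rate cited in the abstract.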
Simulation and Performance Analysis of Adaptive Filtering Algorithms in Noise Cancellation
Ferdouse, Lilatul; Nipa, Tamanna Haque; Jaigirdar, Fariha Tasmin
2011-01-01
Noise problems in signals have gained huge attention due to the need for noise-free output signals in numerous communication systems. The principle of adaptive noise cancellation is to acquire an estimate of the unwanted interfering signal and subtract it from the corrupted signal. The noise cancellation operation is controlled adaptively with the target of achieving an improved signal-to-noise ratio. This paper concentrates upon the analysis of adaptive noise cancellers using the Recursive Least Squares (RLS), Fast Transversal Recursive Least Squares (FTRLS) and Gradient Adaptive Lattice (GAL) algorithms. The performance analysis of the algorithms is based on convergence behavior, convergence time, correlation coefficients and signal-to-noise ratio. After comparing all the simulated results, we observed that GAL performs best in noise cancellation in terms of correlation coefficient, SNR and convergence time. RLS, FTRLS and GAL were never evaluated and compared before on their performance in noise cancellation in ...
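The primary/reference structure of an adaptive noise canceller can be illustrated with the simpler LMS algorithm (deliberately swapped in for brevity; the paper itself compares RLS, FTRLS and GAL, which share the same canceller structure; filter order, step size, and the toy signals are assumptions):

```python
import math
import random

def lms_canceller(primary, reference, order=4, mu=0.02):
    """Adaptive noise canceller: estimate the interference in `primary` from
    the correlated `reference` input and subtract it (LMS weight update)."""
    w = [0.0] * order
    buf = [0.0] * order
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                             # tap-delay line
        y = sum(wi * xi for wi, xi in zip(w, buf))       # interference estimate
        e = d - y                                        # cleaned output sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Toy usage: a sinusoid corrupted by noise that also drives the reference input
random.seed(1)
n = [random.uniform(-1, 1) for _ in range(4000)]
signal = [math.sin(0.05 * k) for k in range(4000)]
primary = [s + 0.8 * nk for s, nk in zip(signal, n)]     # signal + interference
cleaned = lms_canceller(primary, n)
```

After convergence the filter learns the path from reference to primary (here a simple 0.8 gain), so subtracting its output raises the signal-to-noise ratio of `cleaned` relative to `primary`.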
Nakos, J.; Rosinski, S.; Acton, R.; Strait, B.; Schulze, D. [Sandia National Labs., Albuquerque, NM (United States)
1994-09-01
The objective of this paper was a review of a proof-of-principle annealing process conducted on an RV section. Test conditions and setup were described, and photographs of the test setup were presented. Results of various temperature measurements were also presented.
Kniepert, Juliane; Lange, Ilja; van der Kaap, Niels J.; Koster, L. Jan Anton; Neher, Dieter
2014-01-01
Time-delayed collection field (TDCF) and bias-amplified charge extraction (BACE) are applied to as-prepared and annealed poly(3-hexylthiophene):[6,6]-phenyl C-71 butyric acid methyl ester (P3HT:PCBM) blends coated from chloroform. Despite large differences in fill factor, short-circuit current, and
Woo, Jihwan; Miller, Charles A; Abbas, Paul J
2009-05-01
The Hodgkin-Huxley (HH) model does not simulate the significant changes in auditory nerve fiber (ANF) responses to sustained stimulation that are associated with neural adaptation. Given that the electric stimuli used by cochlear prostheses can result in adapted responses, a computational model incorporating an adaptation process is warranted if such models are to remain relevant and contribute to related research efforts. In this paper, we describe the development of a modified HH single-node model that includes potassium ion ( K(+)) concentration changes in response to each action potential. This activity-related change results in an altered resting potential, and hence, excitability. Our implementation of K(+)-related changes uses a phenomenological approach based upon K(+) accumulation and dissipation time constants. Modeled spike times were computed using repeated presentations of modeled pulse-train stimuli. Spike-rate adaptation was characterized by rate decrements and time constants and compared against ANF data from animal experiments. Responses to relatively low (250 pulse/s) and high rate (5000 pulse/s) trains were evaluated and the novel adaptation model results were compared against model results obtained without the adaptation mechanism. In addition to spike-rate changes, jitter and spike intervals were evaluated and found to change with the addition of modeled adaptation. These results provide one means of incorporating a heretofore neglected (although important) aspect of ANF responses to electric stimuli. Future studies could include evaluation of alternative versions of the adaptation model elements and broadening the model to simulate a complete axon, and eventually, a spatially realistic model of the electrically stimulated nerve within extracochlear tissues.
Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
宛剑业; 张飞超; 高丽媛; 刘卫博
2016-01-01
To study the layout planning of the electronic throttle production workshop of company YY, this paper applies both the traditional SLP method and a genetic simulated annealing hybrid algorithm, and uses the Proplanner software to conduct simulation studies of plans 1 and 2 obtained with the two methods. The simulation results show that plan 2 is clearly better than plan 1 in terms of parts transportation distance, handling time, and handling cost, indicating that the genetic simulated annealing hybrid algorithm is more feasible and rational than SLP for workshop layout planning.
罗晨; 李渊; 刘勇; 刘晓明
2012-01-01
Aiming at the slow convergence of the standard genetic algorithm in task allocation, this paper gives a formal specification of task allocation in a multi-agent system, integrates the optimization idea of simulated annealing, and proposes a simulated annealing genetic algorithm (SAGA). The basic ideas and key steps of SAGA are presented in detail, and the algorithm is validated by simulation experiments. The simulation results illustrate that SAGA has better convergence speed and optimization results than the standard genetic algorithm.
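A toy sketch of the hybrid idea, genetic crossover and mutation combined with a Metropolis acceptance test at a decreasing temperature, might look as follows (illustrative only; the paper's SAGA operates on task-allocation encodings, whereas the continuous sphere function and all parameters here are assumptions):

```python
import math
import random

def sa_ga(cost, dim=8, pop_size=20, gens=200, t0=1.0, alpha=0.97):
    """Toy simulated-annealing/GA hybrid: offspring produced by uniform
    crossover plus mutation replace population members via a Metropolis
    test at a temperature that decreases each generation."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    t = t0
    for _ in range(gens):
        for i in range(pop_size):
            a, b = random.sample(pop, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            child[random.randrange(dim)] += random.gauss(0.0, 0.3)  # mutation
            d = cost(child) - cost(pop[i])
            # SA-style acceptance: diverse early (high t), greedy late (low t)
            if d <= 0 or random.random() < math.exp(-d / t):
                pop[i] = child
        t *= alpha
    return min(pop, key=cost)

# Toy usage: minimize the sphere function sum(x_i^2)
random.seed(2)
best = sa_ga(lambda v: sum(x * x for x in v))
```

The annealed acceptance lets worse offspring survive early on, which preserves diversity, while the cooling schedule gradually recovers the greedy selection pressure of a plain GA.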
Control of suspended low-gravity simulation system based on self-adaptive fuzzy PID
Chen, Zhigang; Qu, Jiangang
2017-09-01
In this paper, an active suspended low-gravity simulation system is proposed to follow the vertical motion of a spacecraft. First, the working principle and mathematical model of the low-gravity simulation system are presented. In order to establish the balance process and suppress the strong position interference of the system, a self-adaptive fuzzy PID control strategy is proposed. It combines a PID controller with a fuzzy control strategy, so that the control system can be adjusted automatically by changing the proportional, integral, and differential parameters of the controller in real time. Finally, we use Simulink tools to verify the performance of the controller. The results show that with the self-adaptive fuzzy PID method the system reaches the balanced state quickly, without overshoot or oscillation, while following a speed of 3 m/s, and the simulation accuracy of the system reaches 95.9% or more.
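The core idea, PID gains adjusted online as a function of the error, can be caricatured in a few lines (a crude gain-scheduling stand-in for a genuine fuzzy rule base, and not the authors' Simulink model; the first-order plant, gains, and membership function are all assumptions):

```python
def simulate(adaptive=True, setpoint=1.0, t_max=5.0, dt=0.01):
    """PID control of a first-order plant dy/dt = (u - y)/tau.

    With `adaptive` on, a single membership value on |error| strengthens the
    proportional gain and weakens the integral gain when the error is large
    (a real fuzzy tuner would use rule tables on e and de/dt instead).
    """
    kp0, ki0, kd0, tau = 2.0, 2.0, 0.1, 0.5
    y, integ, prev_e, t = 0.0, 0.0, setpoint, 0.0
    history = []
    while t < t_max:
        e = setpoint - y
        big = min(1.0, abs(e))                     # "error is large" membership
        kp = kp0 * (1.0 + big) if adaptive else kp0
        ki = ki0 * (1.0 - 0.5 * big) if adaptive else ki0
        integ += e * dt
        u = kp * e + ki * integ + kd0 * (e - prev_e) / dt
        prev_e = e
        y += dt * (u - y) / tau                    # first-order plant step
        history.append(y)
        t += dt
    return history

y_adaptive = simulate(adaptive=True)
```

Large errors get an aggressive proportional response while the integral action is held back (limiting windup); near the setpoint the gains relax toward their nominal values, which is the qualitative behavior a fuzzy self-tuner automates.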
Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses
Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)
2014-10-15
Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 for IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the needs of the RAMI analyses. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, making it possible to obtain a realistic availability simulation. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications made to improve it, and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.
张扬; 杨松涛; 张香芝
2012-01-01
This paper studies data fusion in wireless sensor networks (WSNs). Sensor nodes have limited computing and communication capability, and WSNs are deployed with overlapping coverage, producing large volumes of redundant data; data fusion is therefore needed to eliminate redundant and invalid data and save network communication energy. Combining the global search ability of genetic algorithms with the local search ability of simulated annealing, a simulated annealing genetic algorithm (SA-GA) for WSN data fusion is proposed. The algorithm quickly finds the optimal sensor node sequence for mobile agent routing and fuses the data along it. Simulation results show that, compared with the genetic algorithm and the simulated annealing algorithm alone, SA-GA finds the globally optimal data fusion node sequence faster, fuses the data effectively, and achieves lower network energy consumption and network delay.
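A minimal sketch of the SA-GA idea applied to ordering sensor nodes for a mobile agent route: genetic-style elitist selection supplies candidate routes, while a simulated-annealing criterion decides whether a mutated child replaces its parent. The route representation, 2-opt mutation, and parameters are assumptions for illustration, not the authors' implementation.

```python
import math, random

def route_cost(route, pts):
    """Total travel distance of an open route over 2-D node positions."""
    return sum(math.dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def sa_ga_route(pts, pop_size=20, gens=200, t0=1.0, cool=0.97, seed=7):
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        pop.sort(key=lambda r: route_cost(r, pts))
        elite = pop[: pop_size // 2]            # GA-style elitist selection
        children = []
        for parent in elite:
            child = parent[:]
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j + 1] = reversed(child[i:j + 1])   # 2-opt style mutation
            delta = route_cost(child, pts) - route_cost(parent, pts)
            # simulated-annealing acceptance of the mutated child
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                children.append(child)
            else:
                children.append(parent)
        pop = elite + children
        temp *= cool
    return min(pop, key=lambda r: route_cost(r, pts))

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]   # toy node layout
best = sa_ga_route(pts)
```

On this toy layout the shortest open route has length 5.0; the hybrid should reach or closely approach it.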
Pawlik, Andreas H; Vecchia, Claudio Dalla
2015-01-01
We present a suite of cosmological radiation-hydrodynamical simulations of the assembly of galaxies driving the reionization of the intergalactic medium (IGM) at z >~ 6. The simulations account for the hydrodynamical feedback from photoionization heating and the explosion of massive stars as supernovae (SNe). Our reference simulation, which was carried out in a box of size 25 comoving Mpc/h using 2 x 512^3 particles, produces a reasonable reionization history and matches the observed UV luminosity function of galaxies. Simulations with different box sizes and resolutions are used to investigate numerical convergence, and simulations in which either SNe or photoionization heating or both are turned off, are used to investigate the role of feedback from star formation. Ionizing radiation is treated using accurate radiative transfer at the high spatially adaptive resolution at which the hydrodynamics is carried out. SN feedback strongly reduces the star formation rates (SFRs) over nearly the full mass range of s...
Analysis and simulation of aperture-sizing strategies with partial adaptive optics
Tyson, Robert K.
1994-05-01
The central core intensity of a stellar image observed by a ground-based telescope can be maximized by a judicious balancing of the adaptive optics system and the size of the telescope entrance aperture. For a given aperture, increasing the number of degrees of adaptive optics turbulence compensation will maximize the brightness of the central core. However, for an observatory using an adaptive optics system with a fixed number of degrees-of-freedom, the largest aperture available will not necessarily result in a maximized image central core. The negative effects of atmospheric turbulence, roughly proportional to e^(-(D/r_0)^(5/3)), cannot always be compensated by the increased light gathering ability of a larger aperture (proportional to D^2). It is shown and verified through simulation that the optimum aperture diameter is a function of N^p · r_0, where N is the number of adaptive optics degrees of freedom and r_0 is the seeing cell size. The simulations show that the exponent p is related to the control algorithm or, more precisely, the figure-of-merit used to drive the deformable mirror actuators. Optimizing the useful aperture of the telescope/adaptive optics system is a strategy that can make use of the variation in site seeing conditions and benefit the astronomer by increasing the available number of observable science objects or reducing the observing time.
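The trade-off can be illustrated numerically with an assumed toy figure of merit in which the light-gathering term D^2 competes against residual turbulence with an effective coherence length N^p · r_0. The functional form, exponent p, and parameter values below are illustrative assumptions, not the paper's model.

```python
import math

def core_intensity(D, r0, N, p=0.5):
    """Toy figure of merit: aperture gain D^2 attenuated by residual
    turbulence with an assumed effective coherence length N**p * r0."""
    r_eff = (N ** p) * r0
    return D ** 2 * math.exp(-((D / r_eff) ** (5.0 / 3.0)))

def optimum_aperture(r0, N, p=0.5):
    """Scan aperture diameters on a coarse grid and return the maximizer."""
    candidates = [0.05 * k for k in range(1, 401)]   # 0.05 m to 20 m
    return max(candidates, key=lambda D: core_intensity(D, r0, N, p))

D_small = optimum_aperture(r0=0.1, N=10)    # few degrees of freedom
D_large = optimum_aperture(r0=0.1, N=100)   # many degrees of freedom
```

Under this model the optimum diameter grows with the number of degrees of freedom, matching the qualitative conclusion of the abstract: a bigger aperture only pays off if the AO system can compensate it.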
Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko
2002-11-01
We have been developing a multi-scale adaptive simulation technique for incompressible turbulent flow, designed so that important scale components in the flow field are detected automatically by the lifting wavelet transform and solved selectively. In conventional incompressible schemes, it is common to solve a Poisson equation for the pressure to satisfy the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively is possible in principle, but troublesome because it requires regenerating control volumes at each time step. We therefore focused on the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. But it is relatively new, and it requires a demonstration study before being combined with wavelet-based adaptation. In the present study, 2-D and 3-D backstep flows were selected as test problems and the applicability to turbulent flow is verified in detail. In addition, the combination of wavelet-based adaptation with the weakly compressible model towards adaptive turbulence simulation is discussed.
Adaptive finite element simulation of flow and transport applications on parallel computers
Kirk, Benjamin Shelton
The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale-multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations poses particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers are therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport application classes in the subsequent application studies to illustrate the flexibility of the
Xia, Peng; Hu, Jie; Peng, Yinghong
2015-12-01
Retinal prostheses for the restoration of functional vision are under development, and visual prostheses targeting proximal stages of the visual pathway are also being explored. To investigate the experience with visual prostheses, psychophysical experiments using simulated prosthetic vision in normally sighted individuals are necessary. In this study, a helmet display showing real-time images from a camera attached to the helmet provided the simulated vision, and experiments on recognizing and discriminating multiple objects were used to evaluate visual performance under different parameters (gray scale, distortion, and dropout). The process of fitting and training with visual prostheses was simulated and estimated by adaptation to the parameters over time. The results showed that increasing the number of gray levels and decreasing phosphene distortion and dropout rate improved recognition performance significantly, and the recognition accuracy was 61.8 ± 7.6% under the optimum condition (gray scale: 8, distortion: k = 0, dropout: 0%). The adaptation experiments indicated that recognition performance improved with time and that the effect of adaptation to distortion was greater than to dropout, which implies a difference in the adaptation mechanisms for the two parameters.
路鹏; 丛晓; 周东岱
2013-01-01
With the application of artificial intelligence techniques in educational evaluation, computerized adaptive testing has gradually become one of the most important forms of educational assessment. In such a test, the computer dynamically updates its estimate of the learner's ability level and selects tailored questions from an item bank, which requires the system to run efficiently enough for practical use. To address this problem, an intelligent item selection system based on the simulated annealing algorithm is proposed. Experimental results show that the method selects nearly optimal questions from the item bank for each learner while greatly improving the efficiency of item selection.
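A hedged sketch of simulated-annealing item selection: choose a fixed-size subset of a 2PL item bank that maximizes total Fisher information at a given ability level, using random swap moves with an annealed acceptance rule. The item model, bank, and cooling schedule are assumptions for illustration, not the system described in the paper.

```python
import math, random

def item_info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def sa_select(bank, theta, k, steps=3000, t0=0.5, cool=0.999, seed=3):
    """Select k items maximizing information at theta via SA swap moves."""
    rng = random.Random(seed)
    chosen = rng.sample(range(len(bank)), k)
    rest = [i for i in range(len(bank)) if i not in chosen]
    info = lambda idx: sum(item_info(*bank[i], theta) for i in idx)
    cur, temp = info(chosen), t0
    for _ in range(steps):
        i, j = rng.randrange(k), rng.randrange(len(rest))
        cand = chosen[:]
        cand[i], swap = rest[j], cand[i]        # propose one swap
        new = info(cand)
        if new > cur or rng.random() < math.exp((new - cur) / max(temp, 1e-9)):
            chosen, rest[j], cur = cand, swap, new
        temp *= cool
    return chosen, cur

rng = random.Random(0)
bank = [(rng.uniform(0.5, 2.0), rng.uniform(-2, 2)) for _ in range(50)]
picked, total_info = sa_select(bank, theta=0.0, k=5)
```

Because total information is a sum over independent items, the optimum here is simply the k most informative items, which makes the SA result easy to check against a greedy bound.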
Childers, J T; LeCompte, T J; Papka, M E; Benjamin, D P
2015-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event-generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application used by LHC experiments to simulate collisions in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Sahni, Onkar; Jansen, Kenneth; Shephard, Mark; Taylor, Charles
2007-11-01
Flow within the healthy human vascular system is typically laminar, but diseased conditions can alter the geometry sufficiently to produce transitional/turbulent flows in (and immediately downstream of) the diseased section. The mean unsteadiness (pulsatile or respiratory cycle) further complicates the situation, making traditional turbulence simulation techniques (e.g., Reynolds-averaged Navier-Stokes simulations (RANS)) suspect. At the other extreme, direct numerical simulation (DNS), while fully appropriate, can lead to large computational expense, particularly when the simulations must be done quickly since they are intended to affect the outcome of a medical treatment (e.g., virtual surgical planning). Producing simulations in a clinically relevant time frame requires: (1) an adaptive meshing technique that closely matches the desired local mesh resolution in all three directions to the highly anisotropic physical length scales in the flow; (2) efficient solution algorithms; and (3) excellent scaling on massively parallel computers. In this presentation we demonstrate results for a subject-specific simulation of an abdominal aortic aneurysm using a stabilized finite element method on anisotropically adapted meshes consisting of O(10^8) elements over O(10^4) processors.
3D design and electric simulation of a silicon drift detector using a spiral biasing adapter
Li, Yu-yun; Xiong, Bo; Li, Zheng
2016-09-01
The detector system combining a spiral biasing adapter (SBA) with a silicon drift detector (SBA-SDD) differs substantially from the traditional silicon drift detector (SDD), including the spiral SDD. It has a spiral biasing adapter of the same design as a traditional spiral SDD and an SDD with concentric rings of the same radius. Compared with the traditional spiral SDD, the SBA-SDD separates the spiral's functions of biasing adapter and p-n junction definition. In this paper, the SBA-SDD is simulated using the Sentaurus TCAD tool, a full 3D device simulator. The simulated electric characteristics include electric potential, electric field, electron concentration, and the single event effect. Because of the special design of the SBA-SDD, the SBA can generate an optimum drift electric field in the SDD, comparable with that of the conventional spiral SDD, while the SDD can be designed with concentric rings to reduce surface area. The current and heat generated in the SBA are also separated from the SDD. To study the single event response, we simulated the induced current caused by incident heavy ions (20 and 50 μm penetration length) with different linear energy transfer (LET). The SBA-SDD can be used just like a conventional SDD, e.g., as an X-ray detector for energy spectroscopy and imaging.
Simulation of Old Urban Residential Area Evolution Based on Complex Adaptive System
YANG Fan; WANG Xiao-ming; HUA Hong
2009-01-01
Based on complex adaptive system theory, this paper proposes an agent-based model of an old urban residential area in which residents and providers are the two adaptive agents. The behaviors of residents and providers are trained with back propagation and simulated with the Swarm software based on environment-rules-agents interaction. The model simulates the evolution of an old urban residential area and analyzes the relations between this evolution and urban management, taking Chaozhou city as the background. Two results are obtained. (1) Simulation without government intervention indicates a trend of housing ageing, environmental deterioration, economic depression, and social filtering-down in the old urban residential area; if development is left to profit-maximizing developers in the market, factors such as social justice and historic and cultural value will be ignored. (2) If the government carries out policies and measures that serve their original aims, the simulation shows that the old urban residential area can adapt to its environment and develop sustainably. This conclusion emphasizes that the government must act as initiator and program maker, directly guiding residents and other providers in the development of old urban residential areas.
Görbil, Gökçe; Gelenbe, Erol
The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.
Bauer, Robert; Gharabaghi, Alireza
2015-01-01
Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting.
朱均燕; 温永仙
2013-01-01
A new method for handling boundary values when generating new solutions in the traditional simulated annealing algorithm is proposed and applied to the two-dimensional Toy model. Structure prediction for four Fibonacci sequences shows that the algorithm is feasible and effective.
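One common way to handle boundary values when a proposed move leaves the search domain is to reflect it back inside; the sketch below applies this inside a standard simulated-annealing loop on a 1-D multimodal test function. The reflection rule, test function, and parameters are illustrative assumptions, not the paper's Toy-model energy or its specific border treatment.

```python
import math, random

def reflect(x, lo, hi):
    """Fold a proposed value back into [lo, hi] by reflection at the borders."""
    width = hi - lo
    x = (x - lo) % (2 * width)
    return lo + (width - abs(x - width))

def anneal(f, lo, hi, steps=5000, t0=2.0, cool=0.999, step=0.5, seed=5):
    """Minimize f on [lo, hi] with Metropolis acceptance and reflected moves."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f, temp = x, fx, t0
    for _ in range(steps):
        cand = reflect(x + rng.gauss(0, step), lo, hi)   # border handling
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = cand, fc
        temp *= cool
    return best_x, best_f

f = lambda x: x * x - 2.0 * math.cos(3.0 * x)   # global minimum f(0) = -2
best_x, best_f = anneal(f, -4.0, 4.0)
```

Reflection keeps proposals inside the feasible region without biasing them toward the border the way simple clipping does.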
覃德泽
2011-01-01
An optimization algorithm based on simulated annealing (SA) is proposed to solve the routing problem. The SA algorithm searches for the best routing strategy using the weighted cumulative expected transmission time as the cost function. Simulations based on 802.11 wireless networks compare network throughput and packet loss ratio under the SA-based routing algorithm and the shortest-path routing algorithm. The results show that the SA-based routing algorithm outperforms shortest-path routing.
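A hedged sketch of SA-based route selection: the cost is a cumulative expected transmission time (ETT) summed over hops, and the SA move re-picks one relay node. The layered relay topology, random ETT values, and parameters are hypothetical stand-ins, not the paper's 802.11 setup or its exact weighted-ETT metric.

```python
import math, random

def path_cost(choice, ett):
    """Cumulative ETT along the chosen relays; ett[h][i][j] is the link ETT
    from node i in layer h to node j in layer h+1 (hypothetical topology)."""
    return sum(ett[h][choice[h]][choice[h + 1]] for h in range(len(choice) - 1))

def sa_route(ett, n_nodes, steps=4000, t0=1.0, cool=0.999, seed=11):
    """Minimize cumulative ETT over relay choices with SA single-node moves."""
    rng = random.Random(seed)
    hops = len(ett) + 1
    choice = [rng.randrange(n_nodes) for _ in range(hops)]
    cur = path_cost(choice, ett)
    best, best_cost, temp = choice[:], cur, t0
    for _ in range(steps):
        cand = choice[:]
        cand[rng.randrange(hops)] = rng.randrange(n_nodes)  # re-pick one relay
        c = path_cost(cand, ett)
        if c < cur or rng.random() < math.exp((cur - c) / max(temp, 1e-12)):
            choice, cur = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cool
    return best, best_cost

rng = random.Random(2)
n, hops = 4, 5   # 4 candidate relays per layer, 5 hops (6 layers)
ett = [[[rng.uniform(1, 10) for _ in range(n)] for _ in range(n)]
       for _ in range(hops)]
route, cost = sa_route(ett, n)
```

For this chain-structured cost a dynamic program would give the exact optimum; SA is shown because it generalizes to cost functions (like weighted cumulative ETT with channel-diversity terms) that break the chain structure.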
王家文; 王岩; 陈前; 李伟; 陈钰青; 靳书岩; 牛伟; 陈凤霞
2014-01-01
A dynamic recrystallization (DRX) model of solution-treated GH4169 alloy was established based on thermal-mechanical simulation tests. The finite element software DEFORM-3D was used to simulate the DRX volume fraction distribution of cylindrical samples under different compressive deformation conditions. Combining quantitative metallographic analysis, electron backscatter diffraction (EBSD) analysis, and the finite element results, the effects of the deformation parameters on the microstructure at the center of the cylindrical samples were investigated. The results show that increasing the deformation temperature and lowering the strain rate both promote deformation homogeneity of the cylindrical GH4169 samples during thermal-mechanical compression. Lowering the strain rate accelerates the transformation of low-angle grain boundaries into high-angle grain boundaries. The DRX nucleation mechanism of the alloy is discontinuous dynamic recrystallization dominated by bulging of the original grain boundaries, and under the tested conditions the evolution of twin boundaries plays an important role in the DRX process. The differences between the experimental and simulated results, and their causes, are also analyzed.
Validation Through Simulations of a Cn2 Profiler for the ESO/VLT Adaptive Optics Facility
Garcia-Rissmann, A; Kolb, J; Louarn, M Le; Madec, P -Y; Neichel, B
2015-01-01
The Adaptive Optics Facility (AOF) project envisages transforming one of the VLT units into an adaptive telescope and providing its ESO (European Southern Observatory) second generation instruments with turbulence corrected wavefronts. For MUSE and HAWK-I this correction will be achieved through the GALACSI and GRAAL AO modules working in conjunction with a 1170 actuators Deformable Secondary Mirror (DSM) and the new Laser Guide Star Facility (4LGSF). Multiple wavefront sensors will enable GLAO and LTAO capabilities, whose performance can greatly benefit from a knowledge about the stratification of the turbulence in the atmosphere. This work, totally based on end-to-end simulations, describes the validation tests conducted on a Cn2 profiler adapted for the AOF specifications. Because an absolute profile calibration is strongly dependent on a reliable knowledge of turbulence parameters r0 and L0, the tests presented here refer only to normalized output profiles. Uncertainties in the input parameters inherent t...
Simulating computer adaptive testing with the Mood and Anxiety Symptom Questionnaire.
Flens, Gerard; Smits, Niels; Carlier, Ingrid; van Hemert, Albert M; de Beurs, Edwin
2016-08-01
In a post hoc simulation study (N = 3,597 psychiatric outpatients), we investigated whether the efficiency of the 90-item Mood and Anxiety Symptom Questionnaire (MASQ) could be improved for assessing clinical subjects with computerized adaptive testing (CAT). A CAT simulation was performed on each of the 3 MASQ subscales (Positive Affect, Negative Affect, and Somatic Anxiety). With the CAT simulation's stopping rule set at a high level of measurement precision, the results showed that patients' test administration can be shortened substantially; the mean decrease in items used for the subscales ranged from 56% up to 74%. Furthermore, the predictive utility of the CAT simulations was sufficient for all MASQ scales. The findings reveal that developing a MASQ CAT for clinical subjects is useful as it leads to more efficient measurement without compromising the reliability of the test outcomes.
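A generic CAT loop of the kind simulated in such studies can be sketched with a 2PL item response model: administer the most informative unused item, re-estimate ability by EAP, and stop once the posterior SD falls below a precision threshold. The item bank, stopping value, and estimator here are assumptions for illustration, not the MASQ CAT itself.

```python
import math, random

def p2pl(a, b, theta):
    """2PL probability of a positive response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = p2pl(a, b, theta)
    return a * a * p * (1.0 - p)

def eap(responses):
    """EAP ability estimate and posterior SD under a N(0,1) prior (grid)."""
    grid = [g / 10.0 for g in range(-40, 41)]
    post = []
    for t in grid:
        w = math.exp(-t * t / 2.0)              # unnormalized prior
        for (a, b), u in responses:
            p = p2pl(a, b, t)
            w *= p if u else (1.0 - p)
        post.append(w)
    z = sum(post)
    mean = sum(t * w for t, w in zip(grid, post)) / z
    var = sum((t - mean) ** 2 * w for t, w in zip(grid, post)) / z
    return mean, math.sqrt(var)

def cat_session(bank, true_theta, se_stop=0.45, seed=9):
    rng = random.Random(seed)
    used, responses, theta, se = set(), [], 0.0, 1.0
    while se > se_stop and len(used) < len(bank):
        # administer the unused item with maximal information at theta
        i = max((j for j in range(len(bank)) if j not in used),
                key=lambda j: info2pl(*bank[j], theta))
        used.add(i)
        u = rng.random() < p2pl(*bank[i], true_theta)   # simulated answer
        responses.append((bank[i], u))
        theta, se = eap(responses)
    return theta, se, len(used)

rng = random.Random(4)
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2.5, 2.5)) for _ in range(60)]
theta_hat, se, n_items = cat_session(bank, true_theta=0.5)
```

The point illustrated is the one the abstract reports: with a precision-based stopping rule, far fewer items are administered than the full bank contains.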
Ostermeir, Katja; Zacharias, Martin
2014-01-15
A Hamiltonian Replica-Exchange Molecular Dynamics (REMD) simulation method has been developed that employs a two-dimensional backbone and one-dimensional side chain biasing potential specifically to promote conformational transitions in peptides. To exploit the replica framework optimally, the level of the biasing potential in each replica was appropriately adapted during the simulations. This resulted in both high exchange rates between neighboring replicas and improved occupancy/flow of all conformers in each replica. The performance of the approach was tested on several peptide and protein systems and compared with regular MD simulations and previous REMD studies. Improved sampling of relevant conformational states was observed for unrestrained protein and peptide folding simulations as well as for refinement of a loop structure with restricted mobility of loop flanking protein regions.
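The replica-exchange step at the heart of such a Hamiltonian REMD scheme can be sketched with the standard Metropolis swap criterion between replicas held at the same temperature but carrying biasing potentials of different strength. The harmonic biases, single 1-D coordinate, and parameters below are illustrative stand-ins for the paper's backbone/side-chain biasing potentials.

```python
import math, random

def bias(k, x):
    """Hypothetical per-replica biasing potential (harmonic, strength k)."""
    return 0.5 * k * x * x

def exchange_ok(beta, u_ii, u_jj, u_ij, u_ji, rng):
    """Metropolis criterion for swapping configurations between two
    Hamiltonian replicas at inverse temperature beta; u_ab is replica a's
    potential evaluated on replica b's configuration."""
    delta = beta * ((u_ij + u_ji) - (u_ii + u_jj))
    return delta <= 0.0 or rng.random() < math.exp(-delta)

def mc_step(x, k, beta, rng, step=0.5):
    """Single Metropolis move within one replica."""
    cand = x + rng.uniform(-step, step)
    d_u = bias(k, cand) - bias(k, x)
    return cand if (d_u <= 0.0 or rng.random() < math.exp(-beta * d_u)) else x

rng = random.Random(8)
beta, ks = 1.0, [1.0, 2.0, 4.0]     # increasing bias strength per replica
xs = [0.0] * len(ks)
attempts = accepts = 0
for sweep in range(2000):
    xs = [mc_step(x, k, beta, rng) for x, k in zip(xs, ks)]
    j = sweep % (len(ks) - 1)        # alternate neighbouring replica pairs
    attempts += 1
    if exchange_ok(beta, bias(ks[j], xs[j]), bias(ks[j + 1], xs[j + 1]),
                   bias(ks[j], xs[j + 1]), bias(ks[j + 1], xs[j]), rng):
        xs[j], xs[j + 1] = xs[j + 1], xs[j]     # swap configurations
        accepts += 1
rate = accepts / attempts
```

Adapting the bias levels during the run, as the paper does, is precisely a way of keeping this acceptance rate high between all neighboring replicas.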
Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries
Deiterding, Ralf; Wood, Stephen L.
2016-09-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
Scale-adaptive simulation of a hot jet in cross flow
Duda, B M; Esteve, M-J [AIRBUS Operations S.A.S., Toulouse (France); Menter, F R; Hansen, T, E-mail: benjamin.duda@airbus.com [ANSYS Germany GmbH, Otterfing (Germany)
2011-12-22
The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.
Multi-GPU adaptation of a simulator of heart electric activity
Víctor M. García
2013-12-01
The simulation of the electrical activity of the heart requires solving a large system of ordinary differential equations, which takes an enormous amount of computation time. In recent years, graphics processing units (GPUs) have been introduced into the field of high-performance computing. These powerful computing devices have attracted research groups that need to simulate the electrical activity of the heart. The authors' research group has developed a simulator of cardiac electrical activity that runs on a single GPU. This article describes the adaptation and modification of the simulator to run on multiple GPUs. The results confirm that the technique significantly reduces execution time compared with a single GPU and allows the solution of larger problems.
Leo, Jennifer; Goodwin, Donna
2014-04-01
Disability simulations have been used as a pedagogical tool to simulate the functional and cultural experiences of disability. Despite their widespread application, disagreement about their ethical use, value, and efficacy persists. The purpose of this study was to understand how postsecondary kinesiology students experienced participation in disability simulations. An interpretative phenomenological approach guided the study's collection of journal entries and clarifying one-on-one interviews with four female undergraduate students enrolled in a required adapted physical activity course. The data were analyzed thematically and interpreted using the conceptual framework of situated learning. Three themes transpired: unnerving visibility, negotiating environments differently, and tomorrow I'll be fine. The students described emotional responses to the use of wheelchairs as disability artifacts, developed awareness of environmental barriers to culturally and socially normative activities, and moderated their discomfort with the knowledge they could end the simulation at any time.
GLAMER Part I: A Code for Gravitational Lensing Simulations with Adaptive Mesh Refinement
Metcalf, R Benton
2013-01-01
A computer code is described for the simulation of gravitational lensing data. The code incorporates adaptive mesh refinement in choosing which rays to shoot, based on the requirements of the source size, location and surface brightness distribution, or to find critical curves/caustics. A variety of source surface brightness models are implemented to represent galaxies and quasar emission regions. The lensing mass can be represented by point masses (stars), smoothed simulation particles, analytic halo models, pixelized mass maps or any combination of these. The deflection and beam distortions (convergence and shear) are calculated by a modified tree algorithm when halos, point masses or particles are used, and by FFT when mass maps are used. The combination of these methods allows a very large dynamical range to be represented in a single simulation. Individual images of galaxies can be represented in a simulation that covers many square degrees. For an individual strongly lensed quasar, source sizes from the s...
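The ray-shooting core that such a code refines adaptively reduces, for point masses, to evaluating the lens equation y = x - α(x) on image-plane rays. The sketch below (without the tree or AMR machinery) checks that mapping against the known image positions of a single point lens; the scaled Einstein-unit convention is assumed for illustration.

```python
import math

def deflection(x, y, lenses):
    """Scaled deflection from point masses; each lens is (m, lx, ly)
    in Einstein-radius units."""
    ax = ay = 0.0
    for m, lx, ly in lenses:
        dx, dy = x - lx, y - ly
        r2 = dx * dx + dy * dy
        ax += m * dx / r2
        ay += m * dy / r2
    return ax, ay

def shoot(x, y, lenses):
    """Lens equation: map an image-plane ray to its source-plane position."""
    ax, ay = deflection(x, y, lenses)
    return x - ax, y - ay

# Check against the single unit point lens, where an on-axis source at y_s
# has images at x = (y_s +/- sqrt(y_s**2 + 4)) / 2.
lenses = [(1.0, 0.0, 0.0)]
y_s = 0.5
x_plus = (y_s + math.sqrt(y_s * y_s + 4.0)) / 2.0
src_x, src_y = shoot(x_plus, 0.0, lenses)
```

Shooting the analytic image position back through the lens equation recovers the source position, which is the basic consistency test an adaptive ray-shooting grid is built around.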
End to end numerical simulations of the MAORY multiconjugate adaptive optics system
Arcidiacono, Carmelo; Bregoli, Giovanni; Diolaiti, Emiliano; Foppiani, Italo; Cosentino, Giuseppe; Lombini, Matteo; Butler, R C; Ciliegi, Paolo
2014-01-01
MAORY is the adaptive optics module of the E-ELT that will feed the MICADO imaging camera through a gravity-invariant exit port. MAORY is foreseen to implement MCAO correction through three high-order deformable mirrors driven by the reference signals of six Laser Guide Stars (LGSs) feeding as many Shack-Hartmann wavefront sensors. A three Natural Guide Star (NGS) system will provide the low-order correction. We developed a code for the end-to-end simulation of the MAORY adaptive optics (AO) system in order to obtain high-fidelity modeling of the system performance. It is based on the IDL language and makes extensive use of GPUs. Here we present the architecture of the simulation tool and its achieved and expected performance.
Saanouni, Kkemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain
2010-06-01
This work proposes a complete adaptive numerical methodology which uses `advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage, and contact with friction, for 2D machining simulation. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on state variables under large plastic deformation, developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage and thermal effects, and their influence on adiabatic shear band formation including the formation of cracks, are investigated.
Buntemeyer, Lars; Peters, Thomas; Klassen, Mikhail; Pudritz, Ralph E
2015-01-01
We present an algorithm for solving the radiative transfer problem on massively parallel computers using adaptive mesh refinement and domain decomposition. The solver is based on the method of characteristics which requires an adaptive raytracer that integrates the equation of radiative transfer. The radiation field is split into local and global components which are handled separately to overcome the non-locality problem. The solver is implemented in the framework of the magneto-hydrodynamics code FLASH and is coupled by an operator splitting step. The goal is the study of radiation in the context of star formation simulations with a focus on early disc formation and evolution. This requires a proper treatment of radiation physics that covers both the optically thin as well as the optically thick regimes and the transition region in particular. We successfully show the accuracy and feasibility of our method in a series of standard radiative transfer problems and two 3D collapse simulations resembling the ear...
Simulation of macromolecular liquids with the adaptive resolution molecular dynamics technique
Peters, J. H.; Klein, R.; Delle Site, L.
2016-08-01
We extend the application of the adaptive resolution technique (AdResS) to liquid systems composed of alkane chains of different lengths. The aim of the study is to develop and test the modifications of AdResS required in order to handle the change of representation of large molecules. The robustness of the approach is shown by calculating several relevant structural properties and comparing them with the results of full atomistic simulations. The extended scheme represents a robust prototype for the simulation of macromolecular systems of interest in several fields, from material science to biophysics.
Kozdon, J. E.; Wilcox, L.; Aranda, A. R.
2014-12-01
The goal of this work is to develop a new set of simulation tools for earthquake rupture dynamics based on state-of-the-art high-order, adaptive numerical methods capable of handling complex geometries. High-order methods are ideal for earthquake rupture simulations as the problems are wave-dominated and the waves excited in simulations propagate over distances much larger than their fundamental wavelength. When high-order methods are used for such problems, significantly fewer degrees of freedom are required as compared with low-order methods. The base numerical method in our new software elements is a discontinuous Galerkin method based on curved, Kronecker product hexahedral elements. We currently use MPI for off-node parallelism and are in the process of exploring strategies for on-node parallelism. Spatial mesh adaptivity is handled using the p4est library, and temporal adaptivity is achieved through an Adams-Bashforth based local time stepping method; we are presently in the process of including dynamic spatial adaptivity, which we believe will be valuable for capturing the small-scale features around the propagating rupture front. One of the key features of our software elements is that the method is provably stable, even after the inclusion of the nonlinear friction laws which govern rupture dynamics. In this presentation we will both outline the structure of the software elements and validate the rupture dynamics with SCEC benchmark test problems. We are also presently developing several realistic simulation geometries, which may also be reported on. Finally, the software elements that we have designed are fully public domain and have been designed with tightly coupled, wave-dominated multiphysics applications in mind. This latter design decision means the software elements are applicable to many other geophysical and non-geophysical applications.
Adaptive particle-cell algorithm for Fokker-Planck based rarefied gas flow simulations
Pfeiffer, M.; Gorji, M. H.
2017-04-01
Recently, the Fokker-Planck (FP) kinetic model has been devised on the basis of the Boltzmann equation (Jenny et al., 2010; Gorji et al., 2011). Particle Monte-Carlo schemes are then introduced for simulations of rarefied gas flows based on the FP kinetics. Here the particles follow independent stochastic paths, and thus a spatio-temporal resolution coarser than the collisional scales becomes possible. In contrast to the direct simulation Monte-Carlo (DSMC) method, the computational cost is independent of the Knudsen number, resulting in efficient simulations at moderate/low Knudsen flows. In order to further exploit the efficiency of the FP method, the required particle-cell resolutions should be found, and a cell refinement strategy has to be developed accordingly. In this study, an adaptive particle-cell scheme applicable to a general unstructured mesh is derived for the FP model. Virtual sub cells are introduced for the adaptive mesh refinement. Moreover, a sub-cell merging algorithm is provided to honor the minimum required number of particles per cell. For assessment, the 70 degree blunted cone reentry flow (Allègre et al., 1997) is studied. Excellent agreement between the introduced adaptive FP method and DSMC is achieved.
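The sub-cell merging step mentioned above can be illustrated with a one-dimensional greedy pass over per-sub-cell particle counts. The flat list representation, the function name, and the greedy strategy are illustrative assumptions, not the paper's actual algorithm:

```python
def merge_subcells(counts, min_particles):
    """Greedily merge adjacent (virtual) sub cells so that every merged
    cell holds at least `min_particles` particles, mirroring the merging
    step that guarantees enough particles for reliable moment estimates.

    Returns a list of (start, end) index ranges into `counts`.
    """
    ranges, start, total = [], 0, 0
    for i, c in enumerate(counts):
        total += c
        if total >= min_particles:
            ranges.append((start, i))
            start, total = i + 1, 0
    if total > 0 and ranges:
        # fold an undersized tail into the last merged cell
        s, _ = ranges.pop()
        ranges.append((s, len(counts) - 1))
    return ranges
```

A production scheme would operate on an unstructured mesh with spatial adjacency; the greedy linear pass only conveys the idea of enforcing a per-cell particle minimum.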
Yao Jianyong; Jiao Zongxia; Han Songshan
2013-01-01
Low-velocity tracking capability is a key performance measure of a flight motion simulator (FMS), and it is mainly affected by nonlinear friction. Although many compensation schemes based on ad hoc friction models have been proposed, this paper deals with low-velocity control without a friction model, since such a scheme is easy to implement in practice. First, a nonlinear model of the FMS middle frame, which is driven by a hydraulic rotary actuator, is built. Noting that in the low-velocity region the unmodeled friction force is mainly characterized by a slowly changing component, a simple adaptive law can be employed to learn this component and compensate for it. To guarantee the boundedness of the adaptation process, a discontinuous projection is utilized and a robust scheme is proposed. The controller achieves a prescribed output tracking transient performance and final tracking accuracy in general, and asymptotic output tracking in the absence of modeling errors. In addition, a saturated projection adaptive scheme is proposed to improve the global learning capability when the velocity becomes large, which might otherwise destabilize the previously proposed projection-based adaptive law. Theoretical and extensive experimental results verify the high-performance nature of the proposed adaptive robust control strategy.
Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations
Calo, Victor M.
2011-05-14
In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp goal-oriented adaptivity and the solver work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
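The automatic h/p refinement decision can be caricatured as a marking pass over element-wise error contributions: elements whose error indicator is small are kept, while the rest are enriched in p (if the local solution appears smooth) or broken in h (if not). The element representation, the scalar `smoothness` indicator, and the thresholds below are hypothetical simplifications of the paper's fully automatic goal-oriented strategy:

```python
def hp_adapt(elements, tol, smooth_threshold=0.5):
    """One marking pass of a simplified hp strategy (sketch).

    elements: list of dicts with 'error' (estimated contribution to the
    quantity of interest) and 'smoothness' in [0, 1]. Smooth local
    solutions favor p-enrichment (exponential convergence); non-smooth
    ones favor h-refinement. Both field names are illustrative.
    """
    marks = {}
    for i, e in enumerate(elements):
        if e['error'] <= tol:
            marks[i] = 'keep'
        elif e['smoothness'] >= smooth_threshold:
            marks[i] = 'p'   # raise the polynomial order
        else:
            marks[i] = 'h'   # break the element into smaller elements
    return marks
```

Iterating such a pass until the total error indicator drops below tolerance is what generates the sequence of meshes; the real algorithm makes these choices automatically from projections rather than a scalar smoothness field.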
Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.
2016-10-01
In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
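The DSF pair potential itself is compact: the Coulomb interaction is damped with a complementary error function and shifted so that both energy and force vanish smoothly at the cutoff. A minimal sketch in the common Fennell-Gezelter form follows; the damping parameter and cutoff values are illustrative, not those used in the paper:

```python
import math

def dsf_potential(r, qi, qj, alpha=0.2, r_cut=12.0):
    """Damped shifted force (DSF) Coulomb pair energy (sketch).

    Both the energy and its derivative (the force) go to zero at r_cut,
    which is what makes the short-range truncation well behaved.
    Units and parameter values here are illustrative.
    """
    if r >= r_cut:
        return 0.0
    shift = math.erfc(alpha * r_cut) / r_cut
    fshift = (math.erfc(alpha * r_cut) / r_cut**2
              + 2.0 * alpha / math.sqrt(math.pi)
              * math.exp(-(alpha * r_cut)**2) / r_cut)
    return qi * qj * (math.erfc(alpha * r) / r - shift + fshift * (r - r_cut))
```

Because the interaction is strictly short-ranged, no Ewald-type reciprocal-space sum is needed, which is what makes the embedding in a dual-resolution setup seamless.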
Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej
2016-08-09
The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to supramolecular CG models, where several molecules are represented as a single CG bead, is challenging, but it provides higher computational gains and a connection to the established MARTINI CG force field. Difficulties that arise from such coupling have so far been bypassed with bundled AT water models, where additional harmonic bonds between oxygen atoms within a given supramolecular water bundle are introduced. While these models simplify the supramolecular coupling, in certain situations they also cause spurious artifacts, such as partial unfolding of biomolecules. In this work, we present a new clustering algorithm, SWINGER, that can concurrently make, break, and remake water bundles and, in conjunction with AdResS, permits the use of original AT water models. We apply our approach to simulate a hybrid SPC/MARTINI water system and show that the essential properties of water are correctly reproduced with respect to standard monoscale simulations. The developed hybrid water model can be used in biomolecular simulations, where a significant speed-up can be obtained without compromising the accuracy of the AT water model.
Two-stage re-estimation adaptive design: a simulation study
Francesca Galli
2013-10-01
Background: Adaptive clinical trial design has been proposed as a promising new approach to improve the drug discovery process. Among the many options available, adaptive sample size re-estimation is of great interest, mainly because of its ability to avoid a large up-front commitment of resources. In this simulation study, we investigate the statistical properties of two-stage sample size re-estimation designs in terms of type I error control, study power and sample size, in comparison with the fixed-sample study. Methods: We simulated a balanced two-arm trial aimed at comparing two means of normally distributed data, using the inverse normal method to combine the results of each stage, and considering scenarios jointly defined by the following factors: the sample size re-estimation method, the information fraction, the type of group sequential boundaries, and the use of futility stopping. Calculations were performed using the statistical software SAS™ (version 9.2). Results: Under the null hypothesis, every type of adaptive design considered maintained the prefixed type I error rate, but futility stopping was required to avoid an unwanted increase in sample size. When deviating from the null hypothesis, the gain in power usually achieved with the adaptive design and its performance in terms of sample size were influenced by the specific design options considered. Conclusions: We show that adaptive designs incorporating futility stopping, a sufficiently high information fraction (50-70%), and the conditional power method for sample size re-estimation have good statistical properties, including a gain in power when trial results are less favourable than anticipated.
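The inverse normal method used to combine the two stages admits a compact sketch: stage-wise one-sided p-values are converted to z-scores and combined with pre-specified weights. The equal weights and the function name below are illustrative; group sequential boundaries and futility rules are omitted:

```python
import math
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine two stage-wise one-sided p-values (sketch).

    Z = sqrt(w1)*Z1 + sqrt(w2)*Z2 with pre-specified weights w1 + w2 = 1
    (e.g. the planned information fractions); returns the combined
    one-sided p-value. Weight values here are illustrative.
    """
    nd = NormalDist()
    z1 = nd.inv_cdf(1.0 - p1)
    z2 = nd.inv_cdf(1.0 - p2)
    z = math.sqrt(w1) * z1 + math.sqrt(w2) * z2
    return 1.0 - nd.cdf(z)
```

Because the weights are fixed in advance, the combined statistic keeps its null distribution even when the second-stage sample size is re-estimated from interim data, which is what preserves the type I error rate.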
Simulation of tsunamis generated by landslides using adaptive mesh refinement on GPU
de la Asunción, M.; Castro, M. J.
2017-09-01
Adaptive mesh refinement (AMR) is a widely used technique to accelerate computationally intensive simulations, which consists of dynamically increasing the spatial resolution of the areas of interest of the domain as the simulation advances. In recent years, many publications have tackled the implementation of AMR-based applications on GPUs in order to take advantage of their massively parallel architecture. In this paper we present the first AMR-based application implemented on GPU for the simulation of tsunamis generated by landslides, using a two-layer shallow water system. We also propose a new strategy for the interpolation and projection of the values of the fine cells in the AMR algorithm based on the fluctuations of the state values instead of the usual approach of considering the current state values. Numerical experiments on artificial and realistic problems show the validity and efficiency of the solver.
Numerical simulation of azimuth electromagnetic wave tool response based on self-adaptive FEM
Li, Hui; Shen, Yi-Ze
2017-07-01
Azimuth electromagnetic wave logging is a new type of electromagnetic prospecting technology. It can detect weak electromagnetic wave signals and realize real-time formation conductivity imaging. To effectively optimize the measurement accuracy of the azimuth electromagnetic wave imaging tool, an efficient numerical simulation algorithm is required. In this paper, a self-adaptive finite element method (FEM) is used to investigate the response of the azimuth electromagnetic wave logging tool for different antenna array configurations and geological conditions. Numerical simulation examples show the accuracy and efficiency of the method and provide a physical interpretation of the amplitude attenuation and phase shift of the electromagnetic wave signal. Moreover, the high-accuracy numerical simulation results are of great value for the calibration and data interpretation of azimuth electromagnetic wave imaging tools.
Effectively explore metastable states of proteins by adaptive nonequilibrium driving simulations
Wan, Biao; Xu, Shun; Zhou, Xin
2017-03-01
Nonequilibrium drivings applied in molecular dynamics (MD) simulations can efficiently extend the visiting range of protein conformations, but might compel systems to go far away from equilibrium and thus mainly explore irrelevant conformations. Here we propose a general method, called adaptive nonequilibrium simulation (ANES), to automatically adjust the external driving on the fly, based on the feedback of the short-time average response of the system. Thus, the ANES approximately keeps the local equilibrium but efficiently accelerates the global motion. We illustrate the capability of the ANES to highly efficiently explore metastable conformations of the deca-alanine peptide, and find that the 0.2-μs ANES approximately captures the important states and the folding and unfolding pathways of HP35 in solution, by comparing with the result of the recent 398-μs equilibrium MD simulation on Anton [S. Piana et al., Proc. Natl. Acad. Sci. USA 109, 17845 (2012), 10.1073/pnas.1201811109].
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro
2016-07-07
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham.
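The adaptive classification into high- and low-activity channels can be caricatured by a simple propensity-based split. The activity measure below (expected firings over the next step) and the threshold are stand-ins for the paper's state-dependent level of activity, not its actual heuristic:

```python
def split_channels(propensities, dt, threshold=10.0):
    """Heuristically split reaction channels into 'fast' and 'slow'.

    A channel whose expected number of firings a_j * dt over the next
    step exceeds `threshold` is treated as fast (simulated by tau-leap);
    the rest are slow (simulated exactly). Both the activity measure and
    the threshold value are illustrative simplifications.
    """
    fast, slow = [], []
    for j, a in enumerate(propensities):
        (fast if a * dt > threshold else slow).append(j)
    return fast, slow
```

Because the split is state-dependent, it would be re-evaluated as the simulation advances, letting channels migrate between the exact and tau-leap treatments.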
Online body schema adaptation based on internal mental simulation and multisensory feedback
Pedro eVicente
2016-03-01
In this paper, we describe a novel approach to obtain automatic adaptation of the robot body schema and to improve the robot's perceptual and motor skills based on this body knowledge. Predictions obtained through a mental simulation of the body are combined with the real sensory feedback to achieve two objectives simultaneously: body schema adaptation and markerless 6D hand pose estimation. The body schema consists of a computer graphics simulation of the robot, which includes the arm and head kinematics (adapted online during the movements) and an appearance model of the hand shape and texture. The mental simulation process generates predictions of how the hand will appear in the robot camera images, based on the body schema and the proprioceptive information (i.e., motor encoders). These predictions are compared to the actual images using Sequential Monte Carlo techniques to feed a particle-based Bayesian estimation method that estimates the parameters of the body schema. The updated body schema improves the estimates of the 6D hand pose, which is then used in a closed-loop control scheme (i.e., visual servoing), enabling precise reaching. We report experiments with the iCub humanoid robot that support the validity of our approach. A number of simulations with precise ground truth were performed to evaluate the estimation capabilities of the proposed framework. We then show how the use of high-performance GPU programming and an edge-based algorithm for visual perception allow for real-time implementation in real-world scenarios.
A Hamiltonian theory of adaptive resolution simulations of classical and quantum models of nuclei
Kreis, Karsten; Donadio, Davide; Kremer, Kurt; Potestio, Raffaello
2015-03-01
Quantum delocalization of atomic nuclei strongly affects the physical properties of low-temperature systems, such as superfluid helium. However, also at room temperature nuclear quantum effects can play an important role for molecules composed of light atoms. An accurate modeling of these effects is possible making use of the Path Integral formulation of Quantum Mechanics. In simulations, this numerically expensive description can be restricted to a small region of space, while modeling the remaining atoms as classical particles. In this way the computational resources required can be significantly reduced. In the present talk we demonstrate the derivation of a Hamiltonian formulation for a bottom-up, theoretically solid coupling between a classical model and a Path Integral description of the same system. The coupling between the two models is established with the so-called Hamiltonian Adaptive Resolution Scheme, resulting in a fully adaptive setup in which molecules can freely diffuse across the classical and the Path Integral regions by smoothly switching their description on the fly. Finally, we show the validation of the approach by means of adaptive resolution simulations of low-temperature parahydrogen.
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
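The core differential evolution jump rule underlying DREAM-style samplers can be sketched as follows. This is only the proposal step of a DE-MC-type scheme (no randomized subspaces, no crossover, no ergodicity safeguards), and the default jump scale 2.38/sqrt(2d) is the standard DE-MC choice rather than necessarily DREAM's tuned value:

```python
import random

def de_proposal(chains, i, gamma=None, eps_scale=1e-6):
    """Differential evolution proposal for chain i (sketch).

    x* = x_i + gamma * (x_a - x_b) + e, where a and b index two distinct
    other chains and e is small Gaussian noise. Because the jump is built
    from the current population, its scale and orientation adapt to the
    target distribution automatically.
    """
    d = len(chains[i])
    if gamma is None:
        gamma = 2.38 / (2.0 * d) ** 0.5
    others = [k for k in range(len(chains)) if k != i]
    a, b = random.sample(others, 2)
    return [chains[i][j] + gamma * (chains[a][j] - chains[b][j])
            + random.gauss(0.0, eps_scale)
            for j in range(d)]
```

The proposal would be accepted or rejected with the usual Metropolis ratio; running many chains in parallel is what supplies the population of difference vectors.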
Joshua Rodewald
2016-10-01
Supply networks existing today in many industries can behave as complex adaptive systems, making them difficult to analyze and assess. Fully understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the structure's dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
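For binary time series with history length one, the transfer entropy used to reverse engineer network links reduces to a plug-in conditional mutual information, T(X→Y) = Σ p(y', y, x) log₂[p(y'|y, x)/p(y'|y)]. The estimator below is a sketch of that quantity, not the authors' code, and needs long series to be reliable:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy T(X -> Y), binary series, history 1.

    A nonzero value indicates that the past of X helps predict the next
    value of Y beyond what Y's own past provides; pairwise estimates over
    all node pairs give a directed network skeleton.
    """
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y', y0, x0)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

On a series where Y simply copies X with a one-step lag, the estimate approaches one bit, while the reverse direction stays near zero, which is the asymmetry that reveals link direction.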
Jana Hoymann
2016-06-01
Decision-makers in the fields of urban and regional planning in Germany face new challenges. High rates of urban sprawl need to be reduced by increased inner-urban development, while settlements have to adapt to climate change and contribute to the reduction of greenhouse gas emissions at the same time. In this study, we analyze conflicts in the management of urban areas and develop integrated sustainable land use strategies for Germany. The spatially explicit land use change model Land Use Scanner is used to simulate alternative scenarios of land use change for Germany for 2030. A multi-criteria analysis is set up based on these scenarios and on a set of indicators. They are used to measure whether the mitigation and adaptation objectives can be achieved and to uncover conflicts between these aims. The results show that built-up and transport area development can be influenced both in magnitude and in spatial distribution to contribute to climate change mitigation and adaptation. Strengthening inner-urban development is particularly effective in reducing built-up and transport area development. It is possible to reduce built-up and transport area development to approximately 30 ha per day in 2030, which matches the sustainability objective of the German Federal Government for the year 2020. In the case of adaptation to climate change, the inclusion of extreme flood events in spatial planning requirements may contribute to a reduction of the damage potential.
Kreis, K.; Fogarty, A. C.; Kremer, K.; Potestio, R.
2015-09-01
In adaptive resolution simulations, molecular fluids are modeled employing different levels of resolution in different subregions of the system. When traveling from one region to the other, particles change their resolution on the fly. One of the main advantages of such approaches is the computational efficiency gained in the coarse-grained region. In this respect the best coarse-grained system to employ in the low resolution region would be the ideal gas, making intermolecular force calculations in the coarse-grained subdomain redundant. In this case, however, a smooth coupling is challenging due to the high energetic imbalance between typical liquids and a system of non-interacting particles. In the present work, we investigate this approach, using as a test case the most biologically relevant fluid, water. We demonstrate that a successful coupling of water to the ideal gas can be achieved with current adaptive resolution methods, and discuss the issues that remain to be addressed.
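The on-the-fly change of resolution in such schemes is usually mediated by a smooth switching function λ(x) that equals 1 in the atomistic region and 0 in the coarse-grained (here, ideal gas) region. A cos² ramp across the hybrid layer is one common choice; the functional form and the parameter names below are illustrative assumptions, not necessarily the paper's:

```python
import math

def resolution(x, hy_start=1.0, hy_width=2.0):
    """Resolution function lambda(x) for an adaptive resolution setup.

    Returns 1 for x <= hy_start (fully atomistic), 0 for
    x >= hy_start + hy_width (fully coarse-grained / ideal gas), and a
    smooth cos^2 ramp across the hybrid layer in between. Geometry and
    parameter values are illustrative.
    """
    if x <= hy_start:
        return 1.0
    if x >= hy_start + hy_width:
        return 0.0
    return math.cos(math.pi * (x - hy_start) / (2.0 * hy_width)) ** 2
```

Interactions (or, in Hamiltonian variants, potential energies) are weighted by λ so that a molecule's description changes continuously as it diffuses across the hybrid layer.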
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca [Department of Mathematics, Ryerson University, Toronto, ON, M5B 2K3 (Canada)
2016-03-15
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
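The underlying tau-leap update, which the adaptive scheme wraps with variable time-stepping, fires each reaction channel a Poisson-distributed number of times per step. The sketch below shows a single well-mixed leap step only (no diffusion events between voxels, no adaptive tau selection, no negative-population rejection), with illustrative function names:

```python
import math
import random

def poisson(lam):
    """Knuth's method; adequate for the small means typical of a leap."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def tau_leap_step(state, stoich, propensities, tau):
    """One tau-leap step: channel j fires Poisson(a_j(state) * tau) times.

    state: species counts; stoich: per-channel state-change vectors;
    propensities: per-channel functions a_j(state).
    """
    new_state = list(state)
    for nu, a in zip(stoich, propensities):
        k = poisson(a(state) * tau)          # number of firings in [t, t+tau)
        for i, change in enumerate(nu):
            new_state[i] += k * change
    return new_state
```

The adaptive scheme's contribution is the choice of tau: large enough to skip many individual events, small enough that the propensities stay nearly constant over the step.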
Quirós-Pacheco, Fernando; Agapito, Guido; Riccardi, Armando; Esposito, Simone; Le Louarn, Miska; Marchetti, Enrico
2012-07-01
This paper presents the performance analysis based on numerical simulations of the Pyramid Wavefront sensor Module (PWM) to be included in ERIS, the new Adaptive Optics (AO) instrument for the Adaptive Optics Facility (AOF). We have analyzed the performance of the PWM working either in a low-order or in a high-order wavefront sensing mode of operation. We show that the PWM in the high-order sensing mode can provide SR > 90% in K band using bright guide stars under median seeing conditions (0.85 arcsec seeing and 15 m/s of wind speed). In the low-order sensing mode, the PWM can sense and correct Tip-Tilt (and if requested also Focus mode) with the precision required to assist the LGS observations to get an SR > 60% and > 20% in K band, using up to a ~16.5 and ~19.5 R-magnitude guide star, respectively.
Adaptive and Iterative Methods for Simulations of Nanopores with the PNP-Stokes Equations
Mitscha-Baude, Gregor; Tulzer, Gerhard; Heitzinger, Clemens
2016-01-01
We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equ...
Long-time simulations of the Kelvin-Helmholtz instability using an adaptive vortex method.
Sohn, Sung-Ik; Yoon, Daeki; Hwang, Woonjae
2010-10-01
The nonlinear evolution of an interface subject to a parallel shear flow is studied by the vortex sheet model. We perform long-time computations for the vortex sheet in density-stratified fluids by using the point vortex method and investigate the late-time dynamics of the Kelvin-Helmholtz instability. We apply an adaptive point insertion procedure and a high-order shock-capturing scheme to the vortex method to handle the nonuniform distribution of point vortices and enhance the resolution. Our adaptive vortex method successfully simulates chaotically distorted interfaces of the Kelvin-Helmholtz instability with fine resolution. The numerical results show that the Kelvin-Helmholtz instability develops a secondary instability at late times, distorting the internal rollup, and eventually evolves into a disordered structure.
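The adaptive point-insertion idea described above can be sketched in a few lines: whenever adjacent interface markers drift farther apart than a spacing threshold, a new point is interpolated between them. This is an illustrative sketch only (function names are hypothetical, and the paper's method uses higher-order local interpolation rather than the simple midpoint used here):

```python
import numpy as np

def adaptive_insert(points, max_spacing):
    """Insert midpoints wherever adjacent interface markers drift
    farther apart than max_spacing (linear-interpolation sketch;
    production vortex codes use higher-order local interpolation)."""
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        if np.linalg.norm(b - a) > max_spacing:
            out.append((a + b) / 2.0)   # refine the under-resolved segment
        out.append(b)
    return np.array(out)

# A stretched interface: markers bunch on the left, spread on the right.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.8, 0.0]])
refined = adaptive_insert(pts, max_spacing=0.3)
```

In a real simulation this check would run every few time steps, so the marker density tracks the stretching of the rolled-up sheet.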
Simulation and analysis of a Truck Model's ride comfort based on fuzzy adaptive control theory
JIANG Li-biao; WANG Deng-feng; NI Qiang; TAN Wei-ming
2007-01-01
This paper analyses and verifies a fuzzy adaptive control strategy for the electronically controlled air suspension system of a heavy truck. A seven-degree-of-freedom vehicle suspension model and a road input model were created; with Matlab/Simulink toolboxes and modules, a dynamic simulation model of a heavy truck with air suspension, a fuzzy adaptive control model, and a height control model for the air springs were built, and the root mean square (RMS) acceleration at the vehicle's centre of gravity under road excitation was analysed under intelligent control. Results show that fuzzy control contributes little to reducing body vibration on good pavement but gives a clear benefit on bad roads, where the RMS acceleration at the vehicle's centre of gravity is obviously lower than that of a passive suspension.
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
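The core of a tau-leaping step with adaptive step-size selection can be sketched as follows. This is a generic, simplified Cao-style tau-selection bound for a well-mixed system, not the authors' scheme for the Reaction-Diffusion Master Equation; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(x, rates, stoich, eps=0.03):
    """One adaptive tau-leaping step (simplified Cao-style selection).
    x: species counts; rates(x): propensities; stoich: one state-change
    vector per reaction channel. The leap tau is chosen so the expected
    relative change of every species stays below eps."""
    a = rates(x)
    mu = stoich.T @ a                      # expected drift per unit time
    sig2 = (stoich**2).T @ a               # variance per unit time
    bound = np.maximum(eps * x, 1.0)
    with np.errstate(divide="ignore"):
        tau = min(np.min(np.where(mu != 0, bound / np.abs(mu), np.inf)),
                  np.min(np.where(sig2 != 0, bound**2 / sig2, np.inf)))
    k = rng.poisson(a * tau)               # number of firings per channel
    return x + stoich.T @ k, tau

# Toy isomerization A <-> B with mass-action propensities.
stoich = np.array([[-1, 1], [1, -1]])      # one row per reaction channel
rates = lambda x: np.array([1.0 * x[0], 0.5 * x[1]])
x, tau = tau_leap_step(np.array([1000, 0]), rates, stoich)
```

The adaptive part is the `tau` computation: fast transients force small leaps, while near-equilibrium stretches allow large ones, which is where the cost savings over exact SSA come from.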
Adaptations to isolated shoulder fatigue during simulated repetitive work. Part I: Fatigue.
Tse, Calvin T F; McDonald, Alison C; Keir, Peter J
2016-08-01
Upper extremity muscle fatigue is challenging to identify during industrial tasks and places changing demands on the shoulder complex that are not fully understood. The purpose of this investigation was to examine adaptation strategies in response to isolated anterior deltoid muscle fatigue while performing simulated repetitive work. Participants completed two blocks of simulated repetitive work separated by an anterior deltoid fatigue protocol; the first block had 20 work cycles and the post-fatigue block had 60 cycles. Each work cycle was 60s in duration and included 4 tasks: handle pull, cap rotation, drill press and handle push. Surface EMG of 14 muscles and upper body kinematics were recorded. Immediately following fatigue, glenohumeral flexion strength was reduced, rating of perceived exertion scores increased and signs of muscle fatigue (increased EMG amplitude, decreased EMG frequency) were present in anterior and posterior deltoids, latissimus dorsi and serratus anterior. Along with other kinematic and muscle activity changes, scapular reorientation occurred in all of the simulated tasks and generally served to increase the width of the subacromial space. These findings suggest that immediately following fatigue people adapt by repositioning joints to maintain task performance and may also prioritize maintaining subacromial space width.
Binocular adaptive optics visual simulator: understanding the impact of aberrations on actual vision
Fernández, Enrique J.; Prieto, Pedro M.; Artal, Pablo
2010-02-01
A novel adaptive optics system is presented for the study of vision. The apparatus is capable of binocular operation. The binocular adaptive optics visual simulator permits measuring and manipulating the ocular aberrations of the two eyes simultaneously. Aberrations can be corrected, or modified, while the subject performs visual testing under binocular vision. One of the most remarkable features of the apparatus is the use of a single correcting device and a single wavefront sensor (Hartmann-Shack). Both the operation and the total cost of the instrument benefit largely from this attribute. The correcting device is a liquid-crystal-on-silicon (LCOS) spatial light modulator. The basic operation of the visual simulator consists in the simultaneous projection of the two eyes' pupils onto both the corrector and the sensor. Examples of the potential of the apparatus for studying the impact of aberrations under binocular vision are presented. Measurements of contrast sensitivity with modified combinations of spherical aberration through focus are shown. Special attention was paid to the simulation of monovision, where one eye is corrected for far vision while the other is focused at a near distance. The results suggest complex binocular interactions. The apparatus can be dedicated to a better understanding of the vision mechanism, which might have an important impact on developing new protocols and treatments for presbyopia. The technique and the instrument might contribute to the search for optimized ophthalmic corrections.
Parkinson, S. D.; Hill, J.; Piggott, M. D.; Allison, P. A.
2014-09-01
High-resolution direct numerical simulations (DNSs) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier-Stokes equations. Two-dimensional simulations are known to produce unrealistic cohesive vortices that are not representative of the real three-dimensional physics. The effect of this phenomenon is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 10^6 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring model performance in capturing the range of dynamics on a range of meshes. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. The use of adaptive mesh optimisation is shown to reduce the required element count by approximately two orders of magnitude in comparison with fixed, uniform mesh simulations. This leads to a substantial reduction in computational cost. The computational savings and flexibility afforded by adaptivity, along with the flexibility of FE methods, make this model well suited to simulating turbidity currents in complex domains.
Cognitive engine based on binary ant colony simulated annealing algorithm
夏龄; 冯文江
2012-01-01
In a cognitive radio system, the cognitive engine dynamically configures the radio's working parameters according to changes in the communication environment and users' requirements. Addressing the intelligent optimization problem in the cognitive engine, a Binary Ant Colony Simulated Annealing (BAC&SA) algorithm is proposed for parameter optimization of cognitive radio. The new algorithm introduces Simulated Annealing (SA) into Binary Ant Colony Optimization (BACO), combining the rapid optimization ability of BACO with the probabilistic jump property of SA, and effectively avoids BACO's tendency to become trapped in local optima. Simulation results show that a cognitive engine based on the BAC&SA algorithm has a considerable advantage over GA and BACO in global search ability and average fitness.
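The SA ingredient that the hybrid borrows, probabilistic acceptance of worse solutions on binary strings, can be sketched as follows. This is a generic Metropolis annealing loop on a toy onemax objective, not the BAC&SA algorithm itself; parameter values are arbitrary:

```python
import math
import random

random.seed(1)

def simulated_annealing(fitness, n_bits=16, t0=2.0, alpha=0.95, steps=400):
    """Metropolis acceptance on bit strings: a worse move is accepted
    with probability exp(delta/T), which shrinks as T cools. This is the
    'probability jump' that lets the search escape local optima."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    fx, t = fitness(x), t0
    for _ in range(steps):
        y = x[:]
        y[random.randrange(n_bits)] ^= 1   # single bit-flip neighbour
        fy = fitness(y)
        if fy >= fx or random.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
        t *= alpha                         # geometric cooling schedule
    return x, fx

best, score = simulated_annealing(sum)     # onemax: count of 1-bits
```

With the fixed seed the run is deterministic; geometric cooling is one common schedule choice, and in the hybrid scheme this acceptance rule would be applied to solutions constructed by the ant colony.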
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)
2015-05-21
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
Phase-field simulation of dendritic solidification using a full threaded tree with adaptive meshing
Yin Yajun; Zhou Jianxin; Liao Dunming; Pang Shengyong; Shen Xu
2014-01-01
Simulation of microstructure evolution during solidification is greatly beneficial to the control of solidification microstructures. A phase-field method based on the full threaded tree (FTT) for the simulation of casting solidification microstructure is proposed in this paper, and the structure of the full threaded tree and the mesh refinement method are discussed. During dendritic growth in solidification, the simulation mesh is adaptively refined at the liquid-solid interface and coarsened in other areas. The numerical results of three-dimensional dendrite growth indicate that the phase-field method based on the FTT is suitable for microstructure simulation. Most importantly, compared with a conventional uniform mesh, the FTT method can increase the spatial and temporal resolutions beyond the limits imposed by the available hardware. At a simulation time of 0.03 s in this study, the computer memory used for computation is no more than 10 MB with the FTT method, while it is about 50 MB with the uniform mesh method. The proposed FTT method is also more efficient in computation time: when the solidification time is 0.17 s in this study, computation takes about 20 h with the uniform mesh method but only 2 h with the FTT method.
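The FTT refinement strategy, refine near the liquid-solid interface and coarsen elsewhere, can be illustrated with a minimal quadtree (the 2D analogue of the paper's 3D tree; all names here are hypothetical):

```python
import math

class Node:
    """Minimal quadtree node standing in for one FTT cell."""
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

def adapt(node, near_interface, max_depth):
    """Refine cells that overlap the solid-liquid interface and
    coarsen the rest: the FTT idea in miniature."""
    if node.depth < max_depth and near_interface(node):
        h = node.size / 2
        node.children = [Node(node.x + dx * h, node.y + dy * h, h,
                              node.depth + 1)
                         for dx in (0, 1) for dy in (0, 1)]
        for c in node.children:
            adapt(c, near_interface, max_depth)
    else:
        node.children = []                 # leaf: coarse away from the front

def leaves(node):
    return [node] if not node.children else \
           [l for c in node.children for l in leaves(c)]

# Stand-in for a dendrite front: a circle of radius 0.3 in the unit square.
def near_interface(cell):
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    return abs(math.hypot(cx - 0.5, cy - 0.5) - 0.3) < cell.size

root = Node(0.0, 0.0, 1.0, 0)
adapt(root, near_interface, max_depth=4)
```

The leaf count stays well below the uniform-mesh count at the same finest resolution (here 4**4 = 256 cells), which is the source of the memory and runtime savings reported above.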
Yoo, Jin-Hyeong; Murugan, Muthuvel; Wereley, Norman M.
2013-04-01
This study investigates a lumped-parameter human body model, including the lower leg, in a seated posture within a quarter-car model for blast injury assessment simulation. To simulate the shock acceleration of the vehicle, mine blast analysis was conducted on a generic land vehicle crew compartment (sand box) structure. For the purpose of simulating human body dynamics with non-linear parameters, a physical model of a lumped-parameter human body within a quarter-car model was implemented using multi-body dynamic simulation software. For implementing the control scheme, a skyhook algorithm was made to work with the multi-body dynamic model by running a co-simulation with the control scheme software plug-in. The injury criteria and tolerance levels for the biomechanical effects are discussed for each of the identified vulnerable body regions, such as the relative head displacement and the neck bending moment. The objective of this analytical model development is to study the performance of an adaptive semi-active magnetorheological damper that can be used for vehicle-occupant protection enhancements to the seat design in a mine-resistant military vehicle.
Modified simulated annealing algorithm for flexible job-shop scheduling problem
李俊; 刘志雄; 张煜; 贺晶晶
2015-01-01
A modified simulated annealing algorithm is put forward to solve the flexible job-shop scheduling problem, using two individual encoding methods taken from particle swarm optimization, based respectively on particle position rounding and on roulette probability assignment. Three different local search methods are employed to constitute the neighborhood structure. The computational results show that the modified simulated annealing algorithm is more effective than the particle swarm, hybrid particle swarm and standard simulated annealing algorithms in solving the flexible job-shop scheduling problem. Compared with the position rounding encoding, the roulette-probability-assignment-based encoding renders the algorithm more effective, and the local search method based on the crossing-over operation improves the solving performance of the algorithm more than the other two search methods.
An Overview of Approaches to Modernize Quantum Annealing Using Local Searches
Nicholas Chancellor
2016-06-01
I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm. The quantum annealing algorithm is an analogue of simulated annealing, a classical numerical technique which is now obsolete. Hence, I explore strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms, and which should additionally be less sensitive to problem mis-specification than the traditional quantum annealing algorithm.
Adaptive life simulator: A novel approach to modeling the cardiovascular system
Kangas, L.J.; Keller, P.E.; Hashem, S. [and others
1995-06-01
In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. This approach also exploits sensor fusion applied to biomedical sensors. Sensor fusion optimizes the utilization of the sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Adaptive learning in agents behaviour: A framework for electricity markets simulation
Pinto, Tiago; Vale, Zita; Sousa, Tiago M.
2014-01-01
This paper presents a methodology to provide decision support to electricity market negotiating players. This model … allows integrating different strategic approaches for electricity market negotiations, and choosing the most appropriate one at each time, for each different negotiation context. This methodology is integrated in ALBidS (Adaptive Learning strategic Bidding System) – a multiagent system that provides … players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. … that combines several distinct strategies to build action proposals, so that the best can be chosen at each time, depending on the context and simulation circumstances. The choosing process includes reinforcement learning algorithms, a mechanism for negotiating contexts analysis, a mechanism for the management …
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Timofeev, E.V.; Tahir, R.B. [McGill Univ., Dept. of Mechanical Engineering, Montreal, Quebec (Canada)]. E-mail: evgeny.timofeev@mcgill.ca; Voinovich, P.A. [A.F. Ioffe Physical-Technical Inst., St. Petersburg Branch of the Joint Supercomputer Center, St. Petersburg (Russian Federation); Moelder, S. [Ryerson Polytechnic Univ., Toronto, Ontario (Canada)
2004-07-01
The concept of 'twin' grid nodes is discussed in the context of unstructured, adaptive meshes that are suitable for highly unsteady flows. The concept is applicable to internal boundary contours (within the computational domain) where the boundary conditions may need to be changed dynamically; for instance, an impermeable solid wall segment can be redefined as a fully permeable invisible boundary segment during the course of the simulation. This can be used to simulate unsteady gas flows with internal boundaries where the flow conditions may change rapidly and drastically. As a demonstration, the idea is applied to study the starting process in hypersonic air inlets by rupturing a diaphragm or by opening wall-perforations. (author)
An adaptive semi-implicit scheme for simulations of unsteady viscous compressible flows
Steinthorsson, Erlendur; Modiano, David; Crutchfield, William Y.; Bell, John B.; Colella, Phillip
1995-11-01
A numerical scheme for simulation of unsteady, viscous, compressible flows is considered. The scheme employs an explicit discretization of the inviscid terms of the Navier-Stokes equations and an implicit discretization of the viscous terms. The discretization is second order accurate in both space and time. Under appropriate assumptions, the implicit system of equations can be decoupled into two linear systems of reduced rank. These are solved efficiently using a Gauss-Seidel method with multigrid convergence acceleration. When coupled with a solution-adaptive mesh refinement technique, the hybrid explicit-implicit scheme provides an effective methodology for accurate simulations of unsteady viscous flows. The methodology is demonstrated for both body-fitted structured grids and for rectangular (Cartesian) grids.
A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides
de la Asunción, Marc; Castro, Manuel J.
2016-04-01
In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.
Simulation of a ground-layer adaptive optics system for the Kunlun Dark Universe Survey Telescope
Peng Jia; Sijiong Zhang
2013-01-01
Ground Layer Adaptive Optics (GLAO) is a recently developed technique extensively applied to ground-based telescopes, which mainly compensates for the wavefront errors induced by ground-layer turbulence to get an appropriate point spread function in a wide field of view. The compensation results mainly depend on the turbulence distribution. The atmospheric turbulence at Dome A in the Antarctic is mainly distributed below 15 meters, which makes it an ideal site for applications of GLAO. The GLAO system has been simulated for the Kunlun Dark Universe Survey Telescope, which will be set up at Dome A, and uses a rotating mirror to generate several laser guide stars and a wavefront sensor with a wide field of view to sequentially measure the wavefronts from different laser guide stars. The system is simulated on a computer and parameters of the system are given, which provide detailed information about the design of a practical GLAO system.
Vogel, Thomas
2015-01-01
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
ADRC or adaptive controller--A simulation study on artificial blood pump.
Wu, Yi; Zheng, Qing
2015-11-01
Active disturbance rejection control (ADRC) has gained popularity because it requires little knowledge about the system to be controlled, has the inherent disturbance rejection ability, and is easy to tune and implement in practical systems. In this paper, the authors compared the performance of an ADRC and an adaptive controller for an artificial blood pump for end-stage congestive heart failure patients using only the feedback signal of pump differential pressure. The purpose of the control system was to provide sufficient perfusion when the patients' circulation system goes through different pathological and activity variations. Because the mean arterial pressure is equal to the total peripheral flow times the total peripheral resistance, this goal was converted to an expression of making the mean aortic pressure track a reference signal. The simulation results demonstrated that the performance of the ADRC is comparable to that of the adaptive controller with the saving of modeling and computational effort and fewer design parameters: total peripheral flow and mean aortic pressure with ADRC fall within the normal physiological ranges in activity variation (rest to exercise) and in pathological variation (left ventricular strength variation), similar to those values of adaptive controller.
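A minimal first-order ADRC loop, with a linear extended state observer estimating and cancelling the "total disturbance", can be sketched on a toy plant. This is a generic textbook structure, not the paper's blood-pump controller; all gains and names are illustrative:

```python
import numpy as np

def adrc_first_order(ref, plant, b0=1.0, wo=20.0, wc=4.0,
                     dt=0.001, steps=3000):
    """Minimal first-order ADRC loop: a linear extended state
    observer (ESO) estimates the output z1 and the lumped 'total
    disturbance' z2, and the control law cancels z2. Observer poles
    are placed at -wo (bandwidth parameterization), controller at -wc."""
    z1 = z2 = 0.0
    y, u = 0.0, 0.0
    hist = []
    for _ in range(steps):
        y = plant(y, u, dt)                # plant with unknown disturbance
        e = z1 - y
        z1 += dt * (z2 + b0 * u - 2 * wo * e)   # ESO state estimate
        z2 += dt * (-wo * wo * e)               # ESO disturbance estimate
        u = (wc * (ref - z1) - z2) / b0         # cancel estimated disturbance
        hist.append(y)
    return np.array(hist)

# Toy plant: dy/dt = -y + u + d(t), with an unknown constant disturbance.
plant = lambda y, u, dt: y + dt * (-y + u + 0.8)
out = adrc_first_order(ref=1.0, plant=plant)
```

Note how little plant knowledge the loop needs, only the input gain estimate `b0`, which is the property the abstract contrasts with the model-heavy adaptive controller.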
Sanvicente Sanchez, H.; Solis, J. F.
2003-07-01
Setting pipe diameters in the least-cost design of a water distribution network is a strongly nonlinear constrained problem with multiple local optima, and its solution space has many infeasible regions. The heuristic optimization algorithm known as Simulated Annealing (SA) is a global method that has been used to perform stochastic searches of the problem's solution space, bettering the performance of other methods. This paper proposes a problem formulation with penalty functions which, among other advantages, allows the stochastic walk performed by the SA algorithm to be less sinuous, crossing infeasible regions. This approach improves the algorithm's efficiency, for the same error level, with respect to a classical constrained formulation. (Author) 17 refs.
Ltaief, Hatem
2016-06-02
We present a high performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used as an operational testbed for simulating the design of new instruments for the European Extremely Large Telescope project (E-ELT), the world's biggest eye and one of Europe's highest priorities in ground-based astronomy. The simulation corresponds to a multi-step multi-stage procedure, which is fed, near real-time, by system and turbulence data coming from the telescope environment. Based on the PLASMA library powered by the OmpSs dynamic runtime system, our implementation relies on a task-based programming model to permit an asynchronous out-of-order execution. Using modern multicore architectures associated with the enormous computing power of GPUs, the resulting data-driven compute-intensive simulation of the entire MOAO application, composed of the tomographic reconstructor and the observing sequence, is capable of coping with the aforementioned real-time challenge and stands as a reference implementation for the computational astronomy community.
Adaptive multi-stage integrators for optimal energy conservation in molecular simulations
Fernández-Pendás, Mario; Akhmatskaya, Elena; Sanz-Serna, J. M.
2016-12-01
We introduce a new Adaptive Integration Approach (AIA) to be used in a wide range of molecular simulations. Given a simulation problem and a step size, the method automatically chooses the optimal scheme out of an available family of numerical integrators. Although we focus on two-stage splitting integrators, the idea may be used with more general families. In each instance, the system-specific integrating scheme identified by our approach is optimal in the sense that it provides the best conservation of energy for harmonic forces. The AIA method has been implemented in the BCAM-modified GROMACS software package. Numerical tests in molecular dynamics and hybrid Monte Carlo simulations of constrained and unconstrained physical systems show that the method successfully realizes the fail-safe strategy. In all experiments, and for each of the criteria employed, the AIA is at least as good as, and often significantly outperforms, the standard Verlet scheme as well as fixed-parameter, optimized two-stage integrators. In particular, for systems where harmonic forces play an important role, the sampling efficiency found in simulations using the AIA is up to 5 times better than that achieved with the other tested schemes.
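The "scan a family of two-stage integrators and keep the one with the best energy conservation for harmonic forces" idea can be sketched for a single harmonic oscillator. This is an illustrative miniature, not the AIA implementation in GROMACS; the splitting parameter `lam` and the scan range are assumptions:

```python
import numpy as np

def two_stage_step(q, p, h, lam, force=lambda q: -q):
    """One step of the one-parameter two-stage splitting family
    B(lam*h) A(h/2) B((1-2*lam)*h) A(h/2) B(lam*h)
    for H = p^2/2 + q^2/2; lam = 1/4 reproduces two half-steps
    of velocity Verlet."""
    p = p + lam * h * force(q)
    q = q + 0.5 * h * p
    p = p + (1 - 2 * lam) * h * force(q)
    q = q + 0.5 * h * p
    p = p + lam * h * force(q)
    return q, p

def max_energy_error(lam, h=0.5, steps=2000):
    """Worst energy deviation along a trajectory: the quantity the
    adaptive choice tries to minimize for the given step size h."""
    q, p = 1.0, 0.0
    e0, worst = 0.5 * (q * q + p * p), 0.0
    for _ in range(steps):
        q, p = two_stage_step(q, p, h, lam)
        worst = max(worst, abs(0.5 * (q * q + p * p) - e0))
    return worst

# Adaptive choice in miniature: scan the family, keep the best lam.
lams = np.linspace(0.15, 0.35, 41)
best = min(lams, key=max_energy_error)
```

The key point mirrored from the abstract: the best `lam` depends on the step size `h`, so a scheme tuned once and for all cannot beat a per-problem adaptive choice.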
Adaptations during the stance phase of gait for simulated flexion contractures at the knee.
Cerny, K; Perry, J; Walker, J M
1994-06-01
Adaptations in the stance phase of gait to knee flexion contractures simulated by a knee-ankle-foot orthosis were studied in 20 healthy women (mean age: 25 +/- 3.6 years). Stride characteristics, joint postures, floor reactions, and indwelling electromyographic activity of the lower gluteus maximus, vastus lateralis, long head of the biceps femoris, and soleus muscles were measured during walking with the orthosis, with and without contracture simulation. Simulated knee flexion contracture resulted in decreased stride length and velocity and increased forefoot weight bearing and flexion posture in stance. Increases were also seen in magnitude and/or duration of flexion floor reaction torques and gluteus maximus, vastus lateralis, and soleus muscle activity. The addition of a restriction of plantar flexion resulted in a further decrease in velocity and stride length and a small increase in hip extension posture. These results show that knee flexion contractures, simulated in healthy subjects, cause a decrease in gait function with a simultaneous increase in muscular demand.
Simulating spatial adaption of groundwater pumping on seawater intrusion in coastal regions
Grundmann, Jens; Ladwig, Robert; Schütze, Niels; Walther, Marc
2016-04-01
Coastal aquifer systems are used intensively to meet the growing demand for water in those regions. They are especially at risk of seawater intrusion due to aquifer overpumping, limited groundwater replenishment and unsustainable groundwater management, which in turn impacts the social and economic development of coastal regions. One example is the Al-Batinah coastal plain in northern Oman, where irrigated agriculture is practiced by many small-scale farms at varying distances from the sea, each pumping its water from the coastal aquifer. Due to continuous overpumping and progressing saltwater intrusion, farms near the coast have had to close because the irrigation water became too saline. Investigating appropriate management options requires numerical density-dependent groundwater modelling that also portrays the adaptation of groundwater abstraction schemes to water quality. To address this challenge, a moving inner boundary condition is implemented in the numerical density-dependent groundwater model, which adjusts the locations of groundwater abstraction according to the position of the seawater intrusion front, controlled by thresholds of relative chloride concentration. The adaptation process is repeated for each management cycle within transient model simulations and allows feedbacks with the consumers, e.g. agriculture, to be considered by moving farms further inland, or back towards the sea if more fertile soils at the coast can be recovered. To find optimal water management strategies efficiently, the behaviour of the numerical groundwater model under different extraction and replenishment scenarios is approximated by an artificial neural network, using a novel approach to state-space surrogate model development. The derived surrogate is then coupled with an agriculture module within a simulation-based water management optimisation framework to achieve optimal cropping patterns and water abstraction schemes.
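The moving-inner-boundary idea can be reduced to a toy rule: if the relative chloride concentration at a well's location exceeds a threshold, the abstraction point is shifted inland until it sits in acceptable water again. The following is a hypothetical 1D sketch, not the authors' density-dependent model; cell indexing, the threshold value, and the relocation rule are assumptions for illustration.

```python
# Toy 1D sketch (hypothetical) of relocating abstraction wells behind the
# seawater intrusion front. Cells are indexed 0 (coast) upward (inland);
# chloride[i] is the relative chloride concentration in cell i (0..1).
def relocate_wells(wells, chloride, threshold=0.5):
    """Return new well positions: each well moves inland, cell by cell,
    while its cell exceeds the relative-chloride threshold."""
    moved = []
    for cell in wells:
        while cell < len(chloride) - 1 and chloride[cell] > threshold:
            cell += 1  # shift abstraction one cell inland
        moved.append(cell)
    return moved

chloride = [0.9, 0.6, 0.3, 0.1]        # intrusion front covers the two coastal cells
print(relocate_wells([0, 1, 3], chloride))  # [2, 2, 3]
```

In the full model this check would run once per management cycle of the transient simulation, with the reverse move (back towards the sea) applied when the front retreats.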
Buntemeyer, Lars; Banerjee, Robi; Peters, Thomas; Klassen, Mikhail; Pudritz, Ralph E.
2016-02-01
We present an algorithm for solving the radiative transfer problem on massively parallel computers using adaptive mesh refinement and domain decomposition. The solver is based on the method of characteristics which requires an adaptive raytracer that integrates the equation of radiative transfer. The radiation field is split into local and global components which are handled separately to overcome the non-locality problem. The solver is implemented in the framework of the magneto-hydrodynamics code FLASH and is coupled by an operator splitting step. The goal is the study of radiation in the context of star formation simulations with a focus on early disc formation and evolution. This requires a proper treatment of radiation physics that covers both the optically thin as well as the optically thick regimes and the transition region in particular. We successfully show the accuracy and feasibility of our method in a series of standard radiative transfer problems and two 3D collapse simulations resembling the early stages of protostar and disc formation.
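The raytracing step at the heart of the method of characteristics can be illustrated with the standard cell-by-cell formal solution of the static, grey transfer equation dI/ds = kappa (S - I), namely I_out = I_in e^(-dtau) + S (1 - e^(-dtau)) with dtau = kappa ds. The sketch below is a generic single-ray integrator under that assumption, not the FLASH/AMR implementation.

```python
# Minimal single-ray formal solution of the radiative transfer equation,
# integrated cell by cell (grey, static, constant source function per cell).
# This is a generic illustration, not the adaptive AMR raytracer itself.
import math

def integrate_ray(i_in, cells):
    """cells: iterable of (kappa, source_function, ds) per grid cell
    along the characteristic; returns the emergent intensity."""
    i = i_in
    for kappa, s, ds in cells:
        dtau = kappa * ds          # optical depth of this cell
        att = math.exp(-dtau)      # attenuation factor
        i = i * att + s * (1.0 - att)
    return i

# optically thick limit: intensity relaxes to the local source function
thick = integrate_ray(0.0, [(100.0, 2.0, 1.0)] * 3)
# optically thin limit: the incident intensity passes through almost unchanged
thin = integrate_ray(1.0, [(1e-6, 2.0, 1.0)] * 3)
```

Capturing both limits, and the transition between them, with one update rule is exactly why the abstract stresses proper treatment of the optically thin and thick regimes.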