WorldWideScience

Sample records for adaptive simulated annealing

  1. Simulated annealing in adaptive optics for imaging the eye retina

    International Nuclear Information System (INIS)

    Zommer, S.; Adler, J.; Lipson, S. G.; Ribak, E.

    2004-01-01

    Full Text: Adaptive optics is a method designed to correct deformed images in real time. Once the distorted wavefront is known, a deformable mirror is used to compensate for the aberrations and return the wavefront to a plane wave. This study concentrates on methods that omit wavefront sensing from the reconstruction process. Such methods use stochastic algorithms to find the extremum of a certain sharpness function, thereby correcting the image without any information on the wavefront. Theoretical work [1] has shown that the optical problem can be mapped onto a model for crystal roughening. The main algorithm applied is simulated annealing. We present a first hardware realization of this algorithm in an adaptive optics system designed to image the retina of the human eye.
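
    A minimal sketch of such wavefront-sensorless correction, assuming a toy stand-in for the sharpness metric; in a real system the metric would be measured from a camera frame after each mirror update, and all names (sharpness, anneal_mirror) and parameter values here are illustrative:

        import math, random

        def sharpness(actuators):
            # Stand-in metric; a real system would measure sharpness on a
            # camera frame after applying this vector to the deformable mirror.
            return -sum((a - 0.3) ** 2 for a in actuators)

        def anneal_mirror(n_act=16, t0=1.0, alpha=0.95, steps_per_t=50, t_min=1e-3):
            state = [random.uniform(-1, 1) for _ in range(n_act)]
            best, best_s = list(state), sharpness(state)
            t = t0
            while t > t_min:
                for _ in range(steps_per_t):
                    trial = list(state)
                    i = random.randrange(n_act)
                    trial[i] += random.gauss(0, t)      # smaller kicks as T drops
                    ds = sharpness(trial) - sharpness(state)
                    # Metropolis rule: always accept improvements, sometimes
                    # accept losses so the search can escape local optima.
                    if ds > 0 or random.random() < math.exp(ds / t):
                        state = trial
                        if sharpness(state) > best_s:
                            best, best_s = list(state), sharpness(state)
                t *= alpha                               # geometric cooling
            return best, best_s

        best, score = anneal_mirror()
        print(round(score, 6))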

  2. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Zheng, E-mail: 19994035@sina.com [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Wang, Jun; Zhou, Bihua [National Defense Key Laboratory on Lightning Protection and Electromagnetic Camouflage, PLA University of Science and Technology, Nanjing 210007 (China); Zhou, Shudao [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044 (China)

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm for estimating the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated using the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
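
    A rough sketch of how a cuckoo search can be hybridized with simulated annealing acceptance and adaptive parameters, in the spirit of the abstract above. The Rosenbrock function stands in for the chaotic-system parameter-estimation cost; all names, schedules, and parameter values are assumptions, not the authors' implementation:

        import math, random

        def rosenbrock(x):  # stand-in for the parameter-estimation cost
            return sum(100 * (x[i+1] - x[i]**2)**2 + (1 - x[i])**2
                       for i in range(len(x) - 1))

        def levy_step(beta=1.5):
            # Mantegna's algorithm for a Levy-distributed step length
            num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
            sigma = (num / den) ** (1 / beta)
            u, v = random.gauss(0, sigma), random.gauss(0, 1)
            return u / abs(v) ** (1 / beta)

        def acs_sa(f, dim=3, n_nests=15, iters=300, t0=1.0):
            nests = [[random.uniform(-5, 5) for _ in range(dim)]
                     for _ in range(n_nests)]
            cost = [f(n) for n in nests]
            t = t0
            for k in range(iters):
                pa = 0.5 - 0.3 * k / iters          # adaptive discovery probability
                step = 1.0 * (1 - k / iters)        # adaptive step-size scale
                for i in range(n_nests):
                    trial = [x + step * levy_step() * random.gauss(0, 1)
                             for x in nests[i]]
                    c = f(trial)
                    # SA-style acceptance replaces plain cuckoo search's greedy rule
                    if c < cost[i] or random.random() < math.exp((cost[i] - c) / t):
                        nests[i], cost[i] = trial, c
                for i in range(n_nests):            # abandon a fraction of nests
                    if random.random() < pa:
                        nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
                        cost[i] = f(nests[i])
                t *= 0.99                           # anneal the acceptance rule
            b = min(range(n_nests), key=lambda i: cost[i])
            return nests[b], cost[b]

        random.seed(0)
        print(acs_sa(rosenbrock))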

  3. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    International Nuclear Information System (INIS)

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-01-01

    This paper introduces a novel hybrid optimization algorithm for estimating the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated using the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  4. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    Energy Technology Data Exchange (ETDEWEB)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com [Master Program of Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia); Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id [Global Geophysical Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia)

    2015-04-24

    Earthquake observation is routinely used in regional tectonic monitoring, and also at local scales such as volcano-tectonic and geothermal monitoring. Determining a precise hypocenter is essential; the process involves finding the hypocenter location that minimizes the error between observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code implementing adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data sets from regional tectonic, volcano-tectonic, and geothermal settings. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
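
    A toy illustration of hypocenter location by annealing, assuming a constant-velocity half-space and synthetic travel-time picks; the authors used ray tracing and adaptive simulated annealing, so this is only a simplified sketch of the same idea:

        import math, random

        # Toy setup: surface stations, constant-velocity half-space (assumed).
        stations = [(0, 0), (10, 2), (4, 12), (-8, 6), (3, -9)]   # km
        v = 5.0                                                    # km/s
        true_h = (2.0, 3.0, 8.0, 0.5)                              # x, y, z, origin time

        def travel_time(h, s):
            x, y, z, t0 = h
            return t0 + math.hypot(math.hypot(s[0] - x, s[1] - y), z) / v

        obs = [travel_time(true_h, s) for s in stations]           # synthetic picks

        def rms(h):
            return math.sqrt(sum((travel_time(h, s) - o) ** 2
                                 for s, o in zip(stations, obs)) / len(stations))

        def asa_locate(iters=20000, t0=1.0):
            h = [0.0, 0.0, 5.0, 0.0]                # crude starting model
            e = rms(h)
            for k in range(iters):
                t = t0 * math.exp(-5.0 * k / iters) # fast, ASA-like cooling
                trial = [p + random.gauss(0, t) for p in h]
                trial[2] = abs(trial[2])            # keep depth positive
                et = rms(trial)
                if et < e or random.random() < math.exp((e - et) / max(t, 1e-6)):
                    h, e = trial, et
            return h, e

        random.seed(0)
        print(asa_locate())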

  5. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models, where the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originates in the combinatorial optimization domain, has been adapted to continuous-variable problems and compared with other global optimization methods. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical test functions with known minima, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain (the threshold method, a genetic algorithm, and the tabu search method). The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  6. Placement by thermodynamic simulated annealing

    International Nuclear Information System (INIS)

    Vicente, Juan de; Lanchares, Juan; Hermida, Roman

    2003-01-01

    Combinatorial optimization problems arise in different fields of science and engineering. There exist general techniques for coping with these problems, such as simulated annealing (SA). In spite of SA's success, it usually requires costly experimental studies to fine-tune the most suitable annealing schedule. In this Letter, the classical integrated circuit placement problem is addressed by thermodynamic simulated annealing (TSA). TSA provides a new annealing schedule derived from thermodynamic laws. Unlike SA, the temperature in TSA is free to evolve, and its value is continuously updated from the variation of state functions such as the internal energy and entropy. Thereby, TSA achieves the high-quality results of SA while providing interesting adaptive features.
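
    One reading of the TSA idea is sketched below: the temperature is not scheduled in advance but recomputed from the accumulated internal-energy change dE and an entropy-change estimate dS (taken here as the sum of dE/T over accepted moves), as T = lam * dE / dS. The lam weight, the entropy estimate, and the toy cost function are assumptions, not the authors' exact formulation:

        import math, random

        def tsa(cost, init, lam=0.9, t_init=10.0, iters=30000):
            state, e = list(init), cost(init)
            t, dE, dS = t_init, 0.0, 0.0
            for _ in range(iters):
                trial = [x + random.gauss(0, 0.5) for x in state]
                et = cost(trial)
                de = et - e
                if de <= 0 or random.random() < math.exp(-de / t):
                    state, e = trial, et
                    dE += de
                    dS += de / t       # reversible-heat estimate of entropy change
                    # Both running totals drift negative as the system cools,
                    # so their ratio stays positive and tracks a temperature.
                    if dS < -1e-12:
                        t = max(lam * dE / dS, 1e-6)
            return state, e

        sphere = lambda x: sum(v * v for v in x)
        random.seed(0)
        print(tsa(sphere, [random.uniform(-5, 5) for _ in range(4)]))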

  7. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2014-01-01

    A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed that employs a simulated annealing approach. The proposed metaheuristic can achieve mutual advantages in hostile, dynamic, real-world network situations, and the proposed routing scheme is therefore a powerful method for finding effective solutions to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with existing schemes.

  8. Optimization of PWR fuel assembly radial enrichment and burnable poison location based on adaptive simulated annealing

    International Nuclear Information System (INIS)

    Rogers, Timothy; Ragusa, Jean; Schultz, Stephen; St Clair, Robert

    2009-01-01

    The focus of this paper is to present a concurrent optimization scheme for the radial pin enrichment and burnable poison location in PWR fuel assemblies. The methodology is based on the Adaptive Simulated Annealing (ASA) technique, coupled with a neutron lattice physics code to update the cost function values. In this work, the variations in the pin U-235 enrichment are variables to be optimized radially, i.e., pin by pin. We consider the optimization of two categories of fuel assemblies, with and without Gadolinium burnable poison pins. When burnable poisons are present, both the radial distribution of enrichment and the poison locations are variables in the optimization process. Results for 15 x 15 PWR fuel assembly designs are provided.

  9. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    Science.gov (United States)

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
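
    A compact sketch of the memory idea on a simplified routing problem (single vehicle, no time windows): the annealer keeps a memory of visited solutions and refuses to revisit them. The move type, memory policy, and instance are illustrative assumptions, not the authors' MSA-SA:

        import math, random

        random.seed(1)
        customers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(15)]

        def route_cost(order):
            # Depot at the origin; cost is total tour length through all customers.
            pts = [(0.0, 0.0)] + [customers[i] for i in order] + [(0.0, 0.0)]
            return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

        def msa_sa(t0=20.0, alpha=0.97, steps=60, mem_size=300):
            cur = list(range(len(customers)))
            random.shuffle(cur)
            best = list(cur)
            memory = {tuple(cur)}              # memory structure of visited routes
            t = t0
            while t > 1e-2:
                for _ in range(steps):
                    i, j = random.sample(range(len(cur)), 2)
                    cand = list(cur)
                    cand[i], cand[j] = cand[j], cand[i]   # swap two customers
                    if tuple(cand) in memory:
                        continue                           # do not revisit solutions
                    de = route_cost(cand) - route_cost(cur)
                    if de < 0 or random.random() < math.exp(-de / t):
                        cur = cand
                        if len(memory) < mem_size:         # crude capacity cap
                            memory.add(tuple(cur))
                        if route_cost(cur) < route_cost(best):
                            best = list(cur)
                t *= alpha
            return best, route_cost(best)

        print(round(msa_sa()[1], 3))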

  10. On lumped models for thermodynamic properties of simulated annealing problems

    International Nuclear Information System (INIS)

    Andresen, B.; Pedersen, J.M.; Salamon, P.; Hoffmann, K.H.; Mosegaard, K.; Nulton, J.

    1987-01-01

    The paper describes a new method for the estimation of thermodynamic properties for simulated annealing problems using data obtained during a simulated annealing run. The method works by estimating energy-to-energy transition probabilities and is well adapted to simulations such as simulated annealing, in which the system is never in equilibrium. (orig.)
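
    A sketch of the estimation idea: record the energy at every step of an annealing run, bin the energies, and count bin-to-bin transitions; no equilibrium assumption is needed. The binning scheme and the toy one-dimensional landscape are assumptions:

        import math, random
        from collections import defaultdict

        def transition_matrix(energy_trace, n_bins=15):
            # Bin the energies seen along the run, count bin-to-bin transitions,
            # and normalize each row into estimated transition probabilities.
            lo, hi = min(energy_trace), max(energy_trace)
            width = (hi - lo) / n_bins or 1.0
            bin_of = lambda e: min(int((e - lo) / width), n_bins - 1)
            counts = defaultdict(lambda: defaultdict(int))
            for e0, e1 in zip(energy_trace, energy_trace[1:]):
                counts[bin_of(e0)][bin_of(e1)] += 1
            probs = {}
            for b, row in counts.items():
                total = sum(row.values())
                probs[b] = {b2: c / total for b2, c in row.items()}
            return probs

        # Collect a trace from a short annealing run on a 1-D toy landscape.
        random.seed(0)
        x, t, trace = 5.0, 2.0, []
        for _ in range(20000):
            y = x + random.gauss(0, 0.5)
            if y * y < x * x or random.random() < math.exp((x * x - y * y) / t):
                x = y
            trace.append(x * x)
            t *= 0.9995
        probs = transition_matrix(trace)
        print({k: len(v) for k, v in sorted(probs.items())})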

  11. Simulated annealing and circuit layout

    NARCIS (Netherlands)

    Aarts, E.H.L.; Laarhoven, van P.J.M.

    1991-01-01

    We discuss the problem of approximately solving circuit layout problems by simulated annealing. For this we first summarize the theoretical concepts of the simulated annealing algorithm using the theory of homogeneous and inhomogeneous Markov chains. Next we briefly review general aspects of the

  12. Global optimization and simulated annealing

    NARCIS (Netherlands)

    Dekkers, A.; Aarts, E.H.L.

    1988-01-01

    In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of Rn at which some real-valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing

  13. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared with perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.

  14. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d'autres methodes d'optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models, where the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originates in the combinatorial optimization domain, has been adapted to continuous-variable problems and compared with other global optimization methods. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical test functions with known minima, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain (the threshold method, a genetic algorithm, and the tabu search method). The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)

  15. Very fast simulated re-annealing

    OpenAIRE

    L. Ingber

    1989-01-01

    An algorithm is developed to statistically find the best global fit of a nonlinear, non-convex cost function over a D-dimensional space. It is argued that this algorithm permits an annealing schedule for the ''temperature'' T decreasing exponentially in annealing time k, T = T0 exp(-c k^(1/D)). The introduction of re-annealing also permits adaptation to changing sensitivities in the multidimensional parameter space. This annealing schedule is faster than fast Cauchy annealing, ...
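
    The schedule itself is easy to tabulate. The sketch below compares the re-annealing schedule T = T0 exp(-c k^(1/D)) with the classical Boltzmann (T0/ln k) and fast Cauchy (T0/k) schedules; the constants T0 = c = 1 and D = 4 are illustrative:

        import math

        def vfsr_temperature(k, t0=1.0, c=1.0, dim=4):
            # Very fast re-annealing: T decreases exponentially in k**(1/D)
            return t0 * math.exp(-c * k ** (1.0 / dim))

        for k in (1, 10, 100, 1000, 10000):
            boltzmann = 1.0 / math.log(k + 1)      # classical schedule, T0/ln k
            cauchy = 1.0 / (k + 1)                 # fast Cauchy schedule, T0/k
            print(f"k={k:>6}  VFSR={vfsr_temperature(k):.2e}  "
                  f"Cauchy={cauchy:.2e}  Boltzmann={boltzmann:.2e}")

    For D = 4 the VFSR temperature drops below the Cauchy schedule by k = 10000, illustrating the claimed speed-up of the exponential-in-k^(1/D) form.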

  16. Simulated annealing with constant thermodynamic speed

    International Nuclear Information System (INIS)

    Salamon, P.; Ruppeiner, G.; Liao, L.; Pedersen, J.

    1987-01-01

    Arguments are presented to the effect that the optimal annealing schedule for simulated annealing proceeds with constant thermodynamic speed, i.e., with dT/dt = -(v T)/(ε √C), where T is the temperature, ε is the relaxation time, C is the heat capacity, t is the time, and v is the thermodynamic speed. Experimental results consistent with this conjecture are presented from simulated annealing on graph partitioning problems. (orig.)
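
    A sketch of how such a schedule could be driven in practice, with the heat capacity estimated on the fly from energy fluctuations as C = Var(E)/T^2 and the relaxation time ε held fixed; these estimates, the unit time step, and all parameter values are assumptions rather than the authors' procedure:

        import math, random, statistics

        def constant_speed_anneal(cost, init, v=0.05, eps=1.0, t0=5.0,
                                  sweeps=150, samples=80):
            state, t = list(init), t0
            for _ in range(sweeps):
                energies = []
                for _ in range(samples):            # sample at fixed temperature
                    trial = [x + random.gauss(0, 0.3) for x in state]
                    de = cost(trial) - cost(state)
                    if de <= 0 or random.random() < math.exp(-de / t):
                        state = trial
                    energies.append(cost(state))
                # Fluctuation estimate of the heat capacity, C = Var(E)/T^2
                c = max(statistics.pvariance(energies) / t ** 2, 1e-9)
                # dT/dt = -v*T/(eps*sqrt(C)), integrated with a unit time step
                t = max(t - v * t / (eps * math.sqrt(c)), 1e-4)
            return state, cost(state)

        sphere = lambda x: sum(u * u for u in x)
        random.seed(0)
        print(constant_speed_anneal(sphere, [random.uniform(-3, 3) for _ in range(5)]))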

  17. Cylinder packing by simulated annealing

    Directory of Open Access Journals (Sweden)

    M. Helena Correia

    2000-12-01

    This paper is motivated by the problem of loading identical items with a circular base (tubes, rolls, ...) onto a rectangular base (the pallet). For practical reasons, all the loaded items are considered to have the same height. Solving this problem consists in determining the positioning pattern of the circular bases of the items on the rectangular pallet while maximizing the number of items. This pattern is repeated for each layer stacked on the pallet. Two algorithms based on the meta-heuristic simulated annealing have been developed and implemented. Tuning the parameters of these algorithms required intensive tests in order to improve their efficiency. The algorithms developed were easily extended to the case of non-identical circles.

  18. Energy and Delay Optimization of Heterogeneous Multicore Wireless Multimedia Sensor Nodes by Adaptive Genetic-Simulated Annealing Algorithm

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2018-01-01

    Energy efficiency and delay optimization are significant for the proliferation of wireless multimedia sensor networks (WMSN). In this article, an energy-efficient, delay-efficient, hardware and software co-optimization platform is researched to minimize the energy cost while guaranteeing the deadline of real-time WMSN tasks. First, a multicore reconfigurable WMSN hardware platform is designed and implemented. This platform uses both a heterogeneous multicore architecture and the dynamic voltage and frequency scaling (DVFS) technique. By this means, the nodes can adjust the hardware characteristics dynamically in terms of the software run-time contexts. Consequently, the software can be executed more efficiently, with less energy cost and shorter execution time. Then, based on this hardware platform, an energy and delay multiobjective optimization algorithm and a DVFS adaptation algorithm are investigated. These algorithms aim to find the global energy optimization solution within an acceptable calculation time and to strip the time redundancy from the task execution process. Thus, the energy efficiency of the WMSN node can be improved significantly, even under strict constraints on the execution time. Simulation and real-world experiments showed that the proposed approaches can decrease the energy cost by more than 29% compared to a traditional single-core WMSN node. Moreover, the node can react quickly to time-sensitive events.

  19. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is widely used in the medical field and has already become very important, especially for image reconstruction purposes. In this paper, it is shown that image processing can be classified into four categories: passive, active, intelligent, and visual image processing. These four classes are first explained through several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence can easily be introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)

  20. Thin-film designs by simulated annealing

    Science.gov (United States)

    Boudet, T.; Chaton, P.; Herault, L.; Gonon, G.; Jouanet, L.; Keller, P.

    1996-11-01

    With the increasing power of computers, new methods in synthesis of optical multilayer systems have appeared. Among these, the simulated-annealing algorithm has proved its efficiency in several fields of physics. We propose to show its performances in the field of optical multilayer systems through different filter designs.

  1. Job shop scheduling by simulated annealing

    NARCIS (Netherlands)

    Laarhoven, van P.J.M.; Aarts, E.H.L.; Lenstra, J.K.

    1992-01-01

    We describe an approximation algorithm for the problem of finding the minimum makespan in a job shop. The algorithm is based on simulated annealing, a generalization of the well known iterative improvement approach to combinatorial optimization problems. The generalization involves the acceptance of

  2. Finite-time thermodynamics and simulated annealing

    International Nuclear Information System (INIS)

    Andresen, B.

    1989-01-01

    When the general, global optimization technique simulated annealing was introduced by Kirkpatrick et al. (1983), this mathematical algorithm was based on an analogy to the statistical mechanical behavior of real physical systems like spin glasses, hence the name. In the intervening span of years the method has proven exceptionally useful for a great variety of extremely complicated problems, notably NP-problems like the travelling salesman, DNA sequencing, and graph partitioning. Only a few highly optimized heuristic algorithms (e.g. Lin, Kernighan 1973) have outperformed simulated annealing on their respective problems (Johnson et al. 1989). Simulated annealing in its current form relies only on the static quantity 'energy' to describe the system, whereas questions of rate, as in the temperature path (annealing schedule, see below), are left to intuition. We extend the connection to physical systems and take over further components from thermodynamics, like ensemble, heat capacity, and relaxation time. Finally we refer to finite-time thermodynamics (Andresen, Salomon, Berry 1984) for a dynamical estimate of the optimal temperature path. (orig.)

  3. A note on simulated annealing to computer laboratory scheduling ...

    African Journals Online (AJOL)

    The concepts, principles and implementation of simulated annealing as a modern heuristic technique are presented. The simulated annealing algorithm is used in solving the real-life problem of computer laboratory scheduling in order to maximize the use of scarce and insufficient resources. KEY WORDS: Simulated Annealing ...

  4. Learning FCM by chaotic simulated annealing

    International Nuclear Information System (INIS)

    Alizadeh, Somayeh; Ghazanfari, Mehdi

    2009-01-01

    A fuzzy cognitive map (FCM) is a directed graph which shows the relations between the essential components of complex systems. It is a very convenient, simple, and powerful tool, used in numerous areas of application. Experts who are familiar with the system components and their relations can generate a related FCM. A significant gap arises when human experts cannot produce the FCM, or when no expert is available to produce it; a new mechanism must then be used to bridge this gap. In this paper, a novel learning method is proposed to construct the FCM using chaotic simulated annealing (CSA). The proposed method is able not only to construct the FCM graph topology but also to extract the weights of the edges from input historical data. The efficiency of the proposed method is shown by comparing its results on some numerical examples with those of the simulated annealing (SA) method.

  5. Simulated annealing algorithm for optimal capital growth

    Science.gov (United States)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, and this motivates the investigation of applying a simulated annealing optimization algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.

  6. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  7. Binary Sparse Phase Retrieval via Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Wei Peng

    2016-01-01

    This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transforms. A greedy-strategy version, which is a parameter-free algorithm, is also proposed for comparison. Extensive numerical simulations indicate that our method is quite effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in its efficiency and high recovery rate, even with fewer Fourier measurements.

  8. Reactor controller design using genetic algorithms with simulated annealing

    International Nuclear Information System (INIS)

    Erkan, K.; Buetuen, E.

    2000-01-01

    This chapter presents a digital control system for the ITU TRIGA Mark-II reactor using genetic algorithms with simulated annealing. The basic principles of genetic algorithms for problem solving are inspired by the mechanism of natural selection, a biological process in which stronger individuals are likely to be the winners in a competing environment. Genetic algorithms use a direct analogy of natural evolution. They are global search techniques for optimization, but they are poor at hill-climbing, whereas simulated annealing has the ability of probabilistic hill-climbing. Thus, the two techniques are combined here to obtain a fine-tuned algorithm that yields faster convergence and a more accurate search, by introducing a new mutation operator like simulated annealing or an adaptive cooling schedule. In control system design, there are currently no systematic approaches for choosing the controller parameters to obtain a desired performance; the parameters are usually determined by trial and error, with simulation and experimental analysis. Here the genetic algorithm is used to search automatically and efficiently for a set of controller parameters that give better performance. (orig.)

  9. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.

  10. Simulated annealing for tensor network states

    International Nuclear Information System (INIS)

    Iblisdir, S

    2014-01-01

    Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)

  11. MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    Ladislav Rosocha

    2015-07-01

    Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Better implementation of their preferences in the scheduling problem might not only improve the work-life balance of doctors and nurses, but may also result in better patient care. This paper focuses on the optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, the psychological well-being, and the work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department, we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm has more than 18 times fewer soft-constraint violations than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules with respect to hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively, owing to the local neighborhood search characteristics of simulated annealing.

  12. FLOWSHOP SCHEDULING USING SIMULATED ANNEALING; PENJADWALAN FLOWSHOP DENGAN MENGGUNAKAN SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    Muhammad Firdaus

    2015-04-01

    This article applies a machine scheduling technique named simulated annealing (SA) to schedule 8 jobs on 5 machines to minimize makespan. A flowshop production line is chosen as a case study for collecting data and attempting to reduce the jobs' makespan. The article also performs a sensitivity analysis to explore the implications of changes in SA parameters such as temperature. The results show that the completion time of the jobs using the SA algorithm is about 5 hours lower than with the existing method. Moreover, the total idle time of the machines is also reduced, by 2.18 per cent, using the SA technique. The sensitivity analysis indicates a significant relationship between changes in temperature and both makespan and computation time.

  13. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available Simulated annealing (SA algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameters’ setting is a key factor for its performance, but it is also a tedious work. To simplify parameters setting, we present a list-based simulated annealing (LBSA algorithm to solve traveling salesman problem (TSP. LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in list is used by Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.

  14. Conventional treatment planning optimization using simulated annealing

    International Nuclear Information System (INIS)

    Morrill, S.M.; Langer, M.; Lane, R.G.

    1995-01-01

    Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions in treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm that allows clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, variable stepsize generalized simulated annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared on a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments, with a constant target volume margin of 1.2 cm. For each case, a clinically accepted cost function, the minimum tumor dose, was maximized subject to a set of normal-tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment

  15. Simulated annealing approach for solving economic load dispatch ...

    African Journals Online (AJOL)

    thermodynamics to solve economic load dispatch (ELD) problems. ... evolutionary programming algorithm has been successfully applied for solving the ... concept behind the simulated annealing (SA) optimization is discussed in Section 3.

  16. Simulated annealing image reconstruction for positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sundermann, E; Lemahieu, I; Desmedt, P [Department of Electronics and Information Systems, University of Ghent, St. Pietersnieuwstraat 41, B-9000 Ghent, Belgium (Belgium)

    1994-12-31

    In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast in comparison with other reconstruction techniques. (authors). 11 refs., 2 figs.

  17. Simulated annealing image reconstruction for positron emission tomography

    International Nuclear Information System (INIS)

    Sundermann, E.; Lemahieu, I.; Desmedt, P.

    1994-01-01

    In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast in comparison with other reconstruction techniques. (authors)

  18. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Science.gov (United States)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be used effectively in problems with a mix of continuous, discrete, and integer design variables.

  19. On simulated annealing phase transitions in phylogeny reconstruction.

    Science.gov (United States)

    Strobl, Maximilian A R; Barker, Daniel

    2016-08-01

    Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
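
    The specific-heat diagnostic is straightforward to reproduce on any cost function: sample energies at each temperature plateau and compute C(T) = Var(E)/T^2, whose peak marks a phase-transition-like regime of the search. The sketch below does this with the Rastrigin function as an illustrative stand-in for a tree-search landscape:

        import math, random, statistics

        def specific_heat_profile(cost, init, t0=5.0, alpha=0.9,
                                  plateaus=30, samples=300):
            # Record C(T) = Var(E)/T^2 at each temperature plateau; a peak in C
            # signals a phase-transition-like regime of the search.
            state, t, profile = list(init), t0, []
            for _ in range(plateaus):
                energies = []
                for _ in range(samples):
                    trial = [x + random.gauss(0, 0.4) for x in state]
                    de = cost(trial) - cost(state)
                    if de <= 0 or random.random() < math.exp(-de / t):
                        state = trial
                    energies.append(cost(state))
                profile.append((t, statistics.pvariance(energies) / t ** 2))
                t *= alpha
            return profile

        rastrigin = lambda x: 10 * len(x) + sum(u * u - 10 * math.cos(2 * math.pi * u)
                                                for u in x)
        random.seed(0)
        for temp, c in specific_heat_profile(rastrigin,
                                             [random.uniform(-5, 5) for _ in range(3)]):
            print(f"T={temp:7.3f}  C={c:9.3f}")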

  20. Angular filter refractometry analysis using simulated annealing.

    Science.gov (United States)

    Angland, P; Haberberger, D; Ivancic, S T; Froula, D H

    2017-10-01

    Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and the statistical uncertainty calculation are based on the minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
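
    A minimal sketch of χ²-driven annealing on a toy problem: fitting a two-parameter exponential profile to noisy synthetic data. The real analysis fits an eight-parameter density profile against an AFR image, so everything here (profile shape, noise level, parameter bounds) is a simplified assumption:

        import math, random

        # Synthetic "measurements" of n(z) = n0 * exp(-z / L) plus noise.
        random.seed(3)
        zs = [0.1 * i for i in range(50)]
        true_n0, true_L = 1.0, 1.5
        sigma = 0.02
        data = [true_n0 * math.exp(-z / true_L) + random.gauss(0, sigma) for z in zs]

        def chi2(p):
            n0, L = p
            return sum((d - n0 * math.exp(-z / L)) ** 2 / sigma ** 2
                       for z, d in zip(zs, data))

        def anneal_fit(t0=1000.0, alpha=0.95, steps=80):
            p = [0.5, 0.5]                     # deliberately poor starting guess
            e, t = chi2(p), t0
            while t > 1e-3:
                for _ in range(steps):
                    q = [max(v + random.gauss(0, 0.05), 1e-3) for v in p]
                    eq = chi2(q)
                    if eq < e or random.random() < math.exp((e - eq) / t):
                        p, e = q, eq
                t *= alpha                      # geometric cooling
            return p, e

        params, chi2_min = anneal_fit()
        print([round(v, 3) for v in params], round(chi2_min, 1))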

  1. Simulated annealing algorithm for reactor in-core design optimizations

    International Nuclear Information System (INIS)

    Zhong Wenfa; Zhou Quan; Zhong Zhaopeng

    2001-01-01

    A nuclear reactor must be optimized for in-core fuel management to make full use of the fuel, reduce the operating cost, and reasonably flatten the power distribution. The author presents a simulated annealing algorithm in which an objective function and a penalty function are provided for optimizing the reactor physics design; the penalty function is used to enforce constraints within the simulated annealing algorithm. The practical design of the NHR-200 was calculated. The results show that K-eff can be increased by 2.5% and the power distribution can be flattened.

  2. Music playlist generation by adapted simulated annealing

    NARCIS (Netherlands)

    Pauws, S.C.; Verhaegh, W.F.J.; Vossen, M.P.H.

    2008-01-01

    We present the design of an algorithm for use in an interactive music system that automatically generates music playlists fitting the music preferences of a user. To this end, we introduce a formal model, define the problem of automatic playlist generation (APG), and prove its NP-hardness. We use a

  3. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  4. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs

  5. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....

  6. Molecular dynamics simulation of annealed ZnO surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Min, Tjun Kit; Yoon, Tiem Leong [School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Lim, Thong Leng [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia)

    2015-04-24

    The effect of thermally annealing a slab of wurtzite ZnO terminated by two surfaces, (0001) (which is oxygen-terminated) and (0001̄) (which is Zn-terminated), is investigated via molecular dynamics simulation using the reactive force field ReaxFF. We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The fraction of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as the formation of oxygen dimers on the surface and the evolution of the partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.

  7. Ranking important nodes in complex networks by simulated annealing

    International Nuclear Information System (INIS)

    Sun Yu; Yao Pei-Yang; Shen Jian; Zhong Yun; Wan Lu-Jun

    2017-01-01

    In this paper, based on simulated annealing a new method to rank important nodes in complex networks is presented. First, the concept of an importance sequence (IS) to describe the relative importance of nodes in complex networks is defined. Then, a measure used to evaluate the reasonability of an IS is designed. By comparing an IS and the measure of its reasonability to a state of complex networks and the energy of the state, respectively, the method finds the ground state of complex networks by simulated annealing. In other words, the method can construct a most reasonable IS. The results of experiments on real and artificial networks show that this ranking method not only is effective but also can be applied to different kinds of complex networks. (paper)

  8. Selection of views to materialize using simulated annealing algorithms

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so as to answer a given set of queries. The goal is to minimize the combination of the query evaluation and view maintenance costs. In this paper, we have addressed and designed algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve it. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. The results show that our algorithm performs better. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.

  9. Annealing simulation of cascade damage using MARLOWE-DAIQUIRI codes

    International Nuclear Information System (INIS)

    Muroga, Takeo

    1984-01-01

    The localization effect of the defects generated by cascade damage on the properties of solids was studied using a computer code based on the two-body collision approximation method and the Monte Carlo method. The MARLOWE and DAIQUIRI codes were partly improved to fit the present calculation of the annealing of cascade damage. The purpose of this study is to investigate the behavior of defects under simulated reactor irradiation conditions. Calculations were made for alpha iron (BCC), with the threshold energy set at 40 eV. The temperature dependence of annealing and the growth of clusters were studied, as was the overlapping effect of cascades: first the extreme case of overlapping was studied, then the practical cases were estimated by interpolation, the state of cascade overlap corresponding to the irradiation rate. The interaction between cascades and dislocations was studied, and the annealing of primary knock-on atoms (PKA) in alpha iron was calculated. At low temperature the effect of dislocations was large, but growth of vacancies was not seen; at high temperature the effect of dislocations was small. An evaluation of the simulation of various ion irradiations and of the growth efficiency of defects was performed. (Kato, T.)

  10. A Simulated Annealing-Based Heuristic Algorithm for Job Shop Scheduling to Minimize Lateness

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-04-01

    A decomposition-based optimization algorithm is proposed for solving large job shop scheduling problems with the objective of minimizing the maximum lateness. First, we use constraint propagation theory to derive the orientation of a portion of the disjunctive arcs. Then we use a simulated annealing algorithm to find a decomposition policy which satisfies the maximum number of oriented disjunctive arcs. Subsequently, each subproblem (corresponding to a subset of operations, as determined by the decomposition policy) is successively solved with a simulated annealing algorithm, which leads to a feasible solution to the original job shop scheduling problem. Computational experiments are carried out on adapted benchmark problems, and the results show the proposed algorithm is effective and efficient in terms of solution quality and time performance.

  11. Restoration of polarimetric SAR images using simulated annealing

    DEFF Research Database (Denmark)

    Schou, Jesper; Skriver, Henning

    2001-01-01

    Filtering synthetic aperture radar (SAR) images ideally results in better estimates of the parameters characterizing the distributed targets in the images while preserving the structures of the nondistributed targets. However, these objectives are normally conflicting, often leading to a filtering approach favoring one of the objectives. An algorithm for estimating the radar cross-section (RCS) for intensity SAR images has previously been proposed in the literature, based on Markov random fields and the stochastic optimization method simulated annealing. A new version of the algorithm is presented

  12. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    A combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP). The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method for it.

  13. Analysis of Trivium by a Simulated Annealing variant

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2010-01-01

    This paper proposes a new method of solving certain classes of systems of multivariate equations over the binary field, together with its cryptanalytic applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations. A characterization of equation systems that may be efficiently solvable by means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method…

  14. Stochastic annealing simulations of defect interactions among subcascades

    Energy Technology Data Exchange (ETDEWEB)

    Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N.

    1997-04-01

    The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.

  15. Optimisation of electron beam characteristics by simulated annealing

    International Nuclear Information System (INIS)

    Ebert, M.A.; University of Adelaide, SA; Hoban, P.W.

    1996-01-01

    Full text: With the development of technology in the field of treatment beam delivery, the possibility of tailoring radiation beams (via manipulation of the beam's phase space) is foreseeable. This investigation involved evaluating a method for determining the characteristics of pure electron beams that provide dose distributions best approximating desired distributions. The aim is to determine which degrees of freedom are advantageous and worth pursuing in a clinical setting. A simulated annealing routine was developed to determine optimum electron beam characteristics. A set of beam elements is defined at the surface of a homogeneous water-equivalent phantom, specifying discrete positions, angles of incidence and electron energies. The optimal weighting of these elements is determined by the (generally approximate) solution of the linear equation Dw = d, where d represents the dose distribution calculated over the phantom, w the vector of (50 to 2x10^4) beam element relative weights, and D a normalised matrix of dose deposition kernels. In the iterative annealing procedure, beam elements are randomly selected, and beam weighting distributions are sampled and used to perturb the selected elements. Perturbations are accepted or rejected according to standard simulated annealing criteria. The result (after the algorithm has terminated on meeting an iteration or optimisation specification) is an approximate solution for the beam weight vector w specified by the above equation. This technique has been applied to several sample dose distributions and phase space restrictions. An example is given of the phase space obtained when endeavouring to conform to a rectangular 100% dose region with polyenergetic though normally incident electrons. For regular distributions, intuitive conclusions regarding the benefits of energy/angular manipulation may be made, whereas for complex distributions, variations in intensity over beam elements of varying energy…
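
    A minimal sketch of the annealing loop described above, assuming random stand-ins for the dose matrix D and the target distribution d (the real kernels, dimensions and schedule are specific to the paper):

      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_elem = 50, 10                    # assumed sizes, for illustration
      D = rng.random((n_vox, n_elem))           # stand-in dose deposition kernels
      d = rng.random(n_vox)                     # stand-in desired dose distribution

      def cost(w):
          return float(np.sum((D @ w - d) ** 2))    # squared residual of Dw = d

      w, temp = rng.random(n_elem), 1.0
      for _ in range(5000):
          k = int(rng.integers(n_elem))                     # randomly select a beam element
          w_new = w.copy()
          w_new[k] = max(0.0, w[k] + rng.normal(0.0, 0.1))  # perturb its weight, keep nonnegative
          delta = cost(w_new) - cost(w)
          if delta <= 0 or rng.random() < np.exp(-delta / temp):
              w = w_new                                     # standard SA acceptance criterion
          temp *= 0.999                                     # slow geometric cooling
      print(cost(w))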

  16. Simulated Annealing-Based Krill Herd Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Gai-Ge Wang

    2013-01-01

    Full Text Available Recently, Gandomi and Alavi proposed a novel swarm-intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating a krill's position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with the acceptance of a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, an elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and the experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.

  17. Differential evolution-simulated annealing for multiple sequence alignment

    Science.gov (United States)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gap percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem for the hybrid evolutionary algorithm, which we accordingly name DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions for the MSA problem evaluated on the three objectives.

  18. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    Directory of Open Access Journals (Sweden)

    Hailong Wang

    2018-01-01

    Full Text Available The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion of simulated annealing: the redesigned F decreases adaptively as the number of iterations increases, without introducing extra parameters. A self-adaptive ε-constrained method is used to handle strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and competitive with other well-known algorithms in terms of convergence speed.
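
    The paper's exact redesign of F is not reproduced here; the snippet below is only an assumed illustration of a Metropolis-style amplitude factor that decays as iterations progress without adding extra parameters:

      import math

      def amplitude_factor(iteration, max_iter, f0=3.0):
          # temperature-like quantity shrinking from 1 toward 0 as the run progresses
          t = 1.0 - iteration / max_iter
          # Metropolis-flavoured decay: large steps early, smaller steps late
          return f0 * math.exp(-(1.0 - t))

      for it in (0, 500, 999):
          print(it, round(amplitude_factor(it, 1000), 4))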

  19. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis, and spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD are combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points; the maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability also decreases exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a…
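
    The cooling behaviour described above can be sketched as follows (function and parameter names are ours, not the spsann API):

      import math

      def schedule(k, k_max, d_max=1000.0, p0=0.95, n0=100):
          dist = d_max * (1.0 - k / k_max)               # max perturbation distance: linear decay
          p_accept = p0 * math.exp(-5.0 * k / k_max)     # acceptance probability: exponential decay
          n_points = max(1, round(n0 * math.exp(-5.0 * k / k_max)))  # perturbed points: exponential decay
          return dist, p_accept, n_points

      for k in (0, 5000, 10000):
          print(schedule(k, 10000))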

  20. Geometric Optimization of Thermo-electric Coolers Using Simulated Annealing

    International Nuclear Information System (INIS)

    Khanh, D V K; Vasant, P M; Elamvazuthi, I; Dieu, V N

    2015-01-01

    The field of thermo-electric coolers (TECs) has grown drastically in recent years. In extreme environments such as thermal energy and gas drilling operations, a TEC is an effective cooling mechanism for instruments. However, limitations such as relatively low energy conversion efficiency and the ability to dissipate only a limited heat flux may seriously degrade the lifetime and performance of the instrument. Until now, much research has been conducted to improve the efficiency of TECs. The material parameters are the most significant, but they are restricted by currently available materials and module fabrication technologies. Therefore, the main objective in finding the optimal TEC design is to define a set of design parameters. In this paper, a new method of optimizing the dimensions of TECs using simulated annealing (SA) to maximize the rate of refrigeration (ROR) is proposed. Equality and inequality constraints were taken into consideration. This work reveals that SA shows better performance than Cheng's work. (paper)

  1. Memoryless cooperative graph search based on the simulated annealing algorithm

    International Nuclear Information System (INIS)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and in an unknown environment, respectively. It is shown that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, building on the single-agent algorithms. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment. (interdisciplinary physics and related areas of science and technology)

  2. Simulated annealing and joint manufacturing batch-sizing

    Directory of Open Access Journals (Sweden)

    Sarker Ruhul

    2003-01-01

    Full Text Available We address an important problem of a manufacturing system that procures raw materials from outside suppliers in lots and processes them into finished goods. We propose an ordering policy for raw materials to meet the requirements of a production facility which, in turn, must deliver finished products to external buyers at fixed time intervals. First, a general cost model is developed considering both raw materials and finished products. This model is then used to develop a simulated annealing approach for determining an optimal raw-material ordering policy and manufacturing batch size that minimize the total cost of meeting customer demands in time. The solutions obtained were compared with those of traditional approaches. Numerical examples are presented.

  3. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure and the advantages of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with capacitated plants and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical solutions show that the proposed model and solution method are practical.

  4. Enhanced Simulated Annealing for Solving Aggregate Production Planning

    Directory of Open Access Journals (Sweden)

    Mohd Rizam Abu Bakar

    2016-01-01

    Full Text Available Simulated annealing (SA) has been an effective means of addressing difficulties related to optimisation problems, and is now a common research discipline with several productive applications such as production planning. Because aggregate production planning (APP) is one of the most considerable problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimise it by SA. In the course of optimising the APP problem, it was found that the capability of SA was inadequate and its performance substandard, particularly for sizable constrained APP problems with many decision variables and plenty of constraints: since the algorithm works sequentially, the current state generates only one next state, which makes the search slower, and the search may fall into a local minimum that represents the best solution in only part of the solution space. In order to enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed. We attempt to augment the search space by starting with N+1 solutions instead of one. To analyse and investigate the operation of the MSA against the standard SA and harmony search (HS), evaluations were made on the real performance data of an industrial company and by simulation. The results show that, compared to SA and HS, MSA offers better quality solutions with regard to convergence and accuracy.
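
    A toy sketch of the core modification, under the stated idea of starting with N+1 solutions instead of one (the APP cost model is replaced by an arbitrary one-dimensional objective; all parameter values are invented):

      import math, random

      def f(x):                                   # stand-in objective, not the APP model
          return (x - 2.0) ** 2 + math.sin(5 * x)

      def msa(n=5, t0=5.0, alpha=0.95, iters=400):
          states = [random.uniform(-10, 10) for _ in range(n + 1)]   # N+1 starting solutions
          temp = t0
          for _ in range(iters):
              for i, x in enumerate(states):
                  cand = x + random.gauss(0, 1)
                  delta = f(cand) - f(x)
                  if delta <= 0 or random.random() < math.exp(-delta / temp):
                      states[i] = cand            # each state anneals in parallel
              temp *= alpha                       # shared geometric cooling
          return min(states, key=f)               # report the best of the population

      print(msa())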

  5. PERBANDINGAN KINERJA ALGORITMA GENETIKA DAN SIMULATED ANNEALING UNTUK MASALAH MULTIPLE OBJECTIVE PADA PENJADWALAN FLOWSHOP

    Directory of Open Access Journals (Sweden)

    I Gede Agus Widyadana

    2002-01-01

    Full Text Available This research focuses on comparing a genetic algorithm and simulated annealing in terms of performance and processing time. The purpose is to assess the ability of both algorithms to solve flowshop scheduling problems with the criteria of minimizing makespan and total flowtime. The capabilities of the two algorithms were examined by simulating problems with varying combinations of jobs and machines. The simulation results show that simulated annealing outperforms the genetic algorithm by up to 90%; the genetic algorithm scores only on processing time, but the trend in processing times suggests that for problems with many jobs and many machines, simulated annealing will run faster than the genetic algorithm. Keywords: genetic algorithm, simulated annealing, flow shop, makespan, total flowtime.

  6. Finding a Hadamard matrix by simulated annealing of spin vectors

    Science.gov (United States)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulation of a combinatorial problem into the optimization of a statistical-mechanics system enables finding better solutions using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) search method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns are vectors with equal numbers of -1 and +1 entries, called SH-vectors. We define SH spin vectors to represent the SH vectors, playing a role similar to spins in the Ising model. The topology of the lattice is generalized into a graph whose edges represent orthogonality relationships among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, i.e., a matrix similar to the SH-matrix but without imposed orthogonality, we perform the SA. Transitions of Q are conducted by random exchanges of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point an H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
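
    The search can be reproduced in miniature for an order that admits a Hadamard matrix (n = 4 below): the energy counts orthogonality violations and moves are {+, -} spin-pair exchanges within a column, as described above. Parameter values are assumptions:

      import math
      import numpy as np

      def energy(Q):
          G = Q.T @ Q                                  # Gram matrix of the columns
          return float(np.sum(np.tril(G, -1) ** 2))    # off-diagonal terms = non-orthogonality

      def search(n=4, t0=2.0, alpha=0.999, iters=20000, seed=1):
          rng = np.random.default_rng(seed)
          Q = np.ones((n, n), dtype=int)               # first column fixed to +1 (seminormalized)
          for c in range(1, n):
              Q[rng.permutation(n)[: n // 2], c] = -1  # balanced random SH spin vectors
          temp = t0
          for _ in range(iters):
              if energy(Q) == 0:
                  return Q                             # orthogonal: an H-matrix is found
              c = int(rng.integers(1, n))
              i = int(rng.choice(np.flatnonzero(Q[:, c] == 1)))
              j = int(rng.choice(np.flatnonzero(Q[:, c] == -1)))
              e0 = energy(Q)
              Q[i, c], Q[j, c] = -1, 1                 # exchange a {+, -} spin pair
              delta = energy(Q) - e0
              if delta > 0 and rng.random() >= math.exp(-delta / temp):
                  Q[i, c], Q[j, c] = 1, -1             # Metropolis rejection: undo the move
              temp *= alpha
          return None

      print(search())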

  7. Simulated Annealing Technique for Routing in a Rectangular Mesh Network

    Directory of Open Access Journals (Sweden)

    Noraziah Adzhar

    2014-01-01

    Full Text Available In the process of automatic design of printed circuit boards (PCBs), the phase following cell placement is routing. Routing is a notoriously difficult problem: even the simplest routing problem, consisting of a set of two-pin nets, is known to be NP-complete. In this research, the routing region is first tessellated into a uniform Nx×Ny array of square cells. The ultimate goal of a routing problem is to achieve complete automatic routing with minimal need for manual intervention; therefore, shortest paths for all connections need to be established. While the classical Dijkstra algorithm is guaranteed to find the shortest path for a single net, each routed net forms an obstacle for later paths, adding complexity to the routing of later nets and making their paths longer than optimal, or sometimes impossible to complete. Today's sequential routing often applies heuristic methods to further refine the solution, rerouting all nets in different orders to improve the quality of the routing. Motivated by this, we apply simulated annealing, a metaheuristic method, to our routing model to produce better candidate sequences.

  8. Simulation of short-term annealing of displacement cascades in FCC metals

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Doran, D.G.; Schwartz, D.M.

    1980-01-01

    Computer models have been developed for the simulation of high energy displacement cascades. The objective is the generation of defect production functions for use in correlation analysis of radiation effects in fusion reactor materials. In particular, the stochastic cascade annealing simulation code SCAS has been developed and used to model the short-term annealing behavior of simulated cascades in FCC metals. The code is fast enough to make annealing of high energy cascades practical. Sets of cascades from 5 keV to 100 keV in copper were generated by the binary collision code MARLOWE.

  9. Differential evolution and simulated annealing algorithms for mechanical systems design

    Directory of Open Access Journals (Sweden)

    H. Saruhan

    2014-09-01

    Full Text Available In this study, nature-inspired algorithms – the Differential Evolution (DE) and the Simulated Annealing (SA) – are utilized to seek a global optimum solution for the weight of a ball-bearing link system assembly with constraints and mixed design variables. The Genetic Algorithm (GA) and the Evolution Strategy (ES) serve as references for the examination and validation of the DE and the SA. The main purpose is to minimize the weight of an assembly system composed of a shaft and two ball bearings. The ball-bearing link system is used extensively in many machinery applications, and designers pay great attention to it because of its significant industrial importance. The problem is complex and time consuming due to the mixed design variables and inequality constraints imposed on the objective function. The results showed that the DE and the SA converged reliably on the global optimum solution, so their application can be very useful in many real-world mechanical system design problems. Moreover, the comparison confirms the effectiveness and superiority of the DE over the other algorithms – the SA, the GA and the ES – in terms of solution quality: a ball-bearing link system assembly weight of 634,099 gr was obtained using the DE, while 671,616 gr, 728213.8 gr and 729445.5 gr were obtained using the SA, the ES and the GA, respectively.

  10. Sensitivity study on hydraulic well testing inversion using simulated annealing

    International Nuclear Information System (INIS)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients; the size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct a fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.

  11. Sensitivity study on hydraulic well testing inversion using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients; the size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct a fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.

  12. New technique for global solar radiation forecasting using simulated annealing and genetic algorithms

    International Nuclear Information System (INIS)

    Tolabi, H.B.; Ayob, S.M.

    2014-01-01

    In this paper, a novel approach based on the simulated annealing algorithm, a meta-heuristic method, is implemented in MATLAB to estimate the monthly average daily global solar radiation on a horizontal surface for six cities of Iran with different climates. A search method based on a genetic algorithm is applied to accelerate the problem solving. Results show that simulated annealing combined with genetic-algorithm search is a suitable method for finding the global solar radiation. (author)

  13. ACTIVITY-BASED COSTING DAN SIMULATED ANNEALING UNTUK PENCARIAN RUTE PADA FLEXIBLE MANUFACTURING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Gregorius Satia Budhi

    2003-01-01

    Full Text Available A Flexible Manufacturing System (FMS) is a manufacturing system formed from several numerically controlled machines combined with a material handling system, so that different jobs can be processed by different machine sequences. FMS combines the high productivity and flexibility of the transfer line and job shop manufacturing systems. In this research, an Activity-Based Costing (ABC) approach was used as the weight in searching for the operation route on the proper machine, so that the total production cost can be optimized. The search method used is simulated annealing, a variant of the hill climbing search method, with the ideal operation time to process a part used as the annealing schedule. Empirical tests showed that using the ABC approach with simulated annealing for the routing process can optimize the total production cost, while using the ideal operation time as the annealing schedule keeps the processing time well under control.

  14. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    Science.gov (United States)

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method better suited to the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed: the efficient computation of gradients and of the multiplication of vectors by Hessian matrices required by Møller's algorithm; the (re)initialization of weights with simulated annealing, required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented, together with results from running SAGRAD on two examples of training data.
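
    A hedged sketch of the interplay described above (not SAGRAD's Fortran 77 implementation): simulated annealing supplies the starting weights, after which a conjugate-gradient routine refines them. SciPy's generic CG stands in for Møller's scaled conjugate gradient, and a tiny logistic model stands in for a neural network:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 3))
      y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

      def loss(w):
          p = 1.0 / (1.0 + np.exp(-X @ w))       # one-layer logistic "network"
          return float(np.mean((p - y) ** 2))

      # annealing phase: (re)initialize the weights
      w, temp = rng.normal(size=3), 1.0
      for _ in range(500):
          cand = w + rng.normal(scale=0.3, size=3)
          d = loss(cand) - loss(w)
          if d <= 0 or rng.random() < np.exp(-d / temp):
              w = cand
          temp *= 0.99

      # conjugate-gradient phase, started from the annealed weights
      res = minimize(loss, w, method="CG")
      print(loss(w), res.fun)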

  15. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As many researchers have noted, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow as to be unaffordable in CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, for example a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
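
    The two schedules can be compared directly; with an arbitrary constant c, the square-root schedule cools orders of magnitude faster at large iteration counts:

      import math

      c = 10.0
      for k in (10, 100, 1000, 10000):
          t_log = c / math.log(k + 1)    # logarithmic schedule (classical guarantee, very slow)
          t_sqrt = c / math.sqrt(k)      # square-root schedule used by the new algorithm
          print(k, round(t_log, 4), round(t_sqrt, 4))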

  16. New heating schedule in hydrogen annealing furnace based on process simulation for less energy consumption

    International Nuclear Information System (INIS)

    Saboonchi, Ahmad; Hassanpour, Saeid; Abbasi, Shahram

    2008-01-01

    Cold rolled steel coils are annealed in batch furnaces to obtain desirable mechanical properties. Annealing operations involve heating and cooling cycles, which take a long time due to the high weight of the coils being annealed. To reduce annealing time, a simulation code was developed that is capable of evaluating more effective schedules for heating the coils; the code is additionally capable of accurately determining the furnace turn-off time for different coil weights and charge dimensions. After studying many heating schedules and considering the heat transfer mechanisms in the annealing furnace, a new schedule with the most advantages was selected as the new operating condition in the hydrogen annealing plant. All furnaces were adjusted to the new heating schedule after experiments had verified the accuracy of the code and the suitability of the new operating condition. Comparison of similar yields of cold rolled coils over two months revealed that, under the new heating schedule, the specific energy consumption of the furnaces decreased by 11%, the heating cycle time by 16%, and the hydrogen consumption by 14%.

  17. New heating schedule in hydrogen annealing furnace based on process simulation for less energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Saboonchi, Ahmad [Department of Mechanical Engineering, Isfahan University of Technology, Isfahan 84154 (Iran); Hassanpour, Saeid [Rayan Tahlil Sepahan Co., Isfahan Science and Technology Town, Isfahan 84155 (Iran); Abbasi, Shahram [R and D Department, Mobarakeh Steel Complex, Isfahan (Iran)

    2008-11-15

    Cold rolled steel coils are annealed in batch furnaces to obtain desirable mechanical properties. Annealing operations involve heating and cooling cycles, which take a long time due to the high weight of the coils being annealed. To reduce annealing time, a simulation code was developed that is capable of evaluating more effective schedules for heating the coils; the code is additionally capable of accurately determining the furnace turn-off time for different coil weights and charge dimensions. After studying many heating schedules and considering the heat transfer mechanisms in the annealing furnace, a new schedule with the most advantages was selected as the new operating condition in the hydrogen annealing plant. All furnaces were adjusted to the new heating schedule after experiments had verified the accuracy of the code and the suitability of the new operating condition. Comparison of similar yields of cold rolled coils over two months revealed that, under the new heating schedule, the specific energy consumption of the furnaces decreased by 11%, the heating cycle time by 16%, and the hydrogen consumption by 14%. (author)

  18. Simulation, hardware implementation and control of a multilevel inverter with simulated annealing algorithm

    Directory of Open Access Journals (Sweden)

    Fayçal Chabni

    2017-09-01

    Full Text Available Harmonic pollution is a very common issue in the field of power electronics; harmonics can cause multiple problems for power converters and electrical loads alike. This paper introduces a modulation method called selective harmonic elimination pulse width modulation (SHEPWM), which allows the elimination of harmonics of specific orders while also controlling the amplitude of the fundamental component of the output voltage. In this work, the SHEPWM strategy is applied to a five-level cascaded inverter. The objective of this study is to demonstrate the full control provided by SHEPWM over any harmonic order, using the simulated annealing optimization algorithm, and to control the amplitude of the fundamental component at any desired value. Simulation and experimental results are presented.
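
    As a hedged illustration of the optimization step (the paper's exact formulation is not reproduced), the sketch below anneals the two switching angles of a five-level staircase waveform so that the per-unit fundamental matches a target m while the 5th harmonic is driven to zero, assuming the standard Fourier model h(n) = (cos(n*a1) + cos(n*a2)) / 2:

      import math, random

      def cost(a1, a2, m=0.8):
          f1 = (math.cos(a1) + math.cos(a2)) / 2.0          # fundamental (per unit)
          f5 = (math.cos(5 * a1) + math.cos(5 * a2)) / 2.0  # 5th harmonic (to eliminate)
          return (f1 - m) ** 2 + f5 ** 2

      a1, a2, temp = 0.3, 1.0, 1.0
      for _ in range(20000):
          b1 = min(max(a1 + random.gauss(0, 0.05), 0.0), math.pi / 2)
          b2 = min(max(a2 + random.gauss(0, 0.05), 0.0), math.pi / 2)
          if b1 < b2:                                       # keep angles ordered in (0, pi/2)
              d = cost(b1, b2) - cost(a1, a2)
              if d <= 0 or random.random() < math.exp(-d / temp):
                  a1, a2 = b1, b2                           # Metropolis acceptance
          temp *= 0.9995                                    # geometric cooling
      print(a1, a2, cost(a1, a2))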

  19. Simulation of the diffusion of implanted impurities in silicon structures at the rapid thermal annealing

    International Nuclear Information System (INIS)

    Komarov, F.F.; Komarov, A.F.; Mironov, A.M.; Makarevich, Yu.V.; Miskevich, S.A.; Zayats, G.M.

    2011-01-01

    Physical and mathematical models and numerical simulation of the diffusion of implanted impurities during rapid thermal treatment of silicon structures are discussed. The calculated results correspond to the experimental results with sufficient accuracy. A simulation software system has been developed and integrated into the ATHENA simulation system developed by Silvaco Inc. This program can simulate the low-energy implantation of B, BF2, P, As, Sb and C ions into silicon structures and the subsequent rapid thermal annealing. (authors)

  20. Annealing of ion irradiated high TC Josephson junctions studied by numerical simulations

    International Nuclear Information System (INIS)

    Sirena, M.; Matzen, S.; Bergeal, N.; Lesueur, J.; Faini, G.; Bernard, R.; Briatico, J.; Crete, D. G.

    2009-01-01

    Recently, annealing of ion irradiated high Tc Josephson junctions (JJs) has been studied experimentally with a view to improving their reproducibility. Here we present numerical simulations, based on random walk and Monte Carlo calculations, of the evolution of JJ characteristics such as the transition temperature Tc' and its spread ΔTc', and compare them with experimental results on junctions irradiated with 100 and 150 keV oxygen ions and annealed at low temperatures (below 80 deg. C). We have successfully used a vacancy-interstitial annihilation mechanism to describe the evolution of Tc' and the homogeneity of a JJ array, analyzing the evolution of the mean defect density and its distribution width. Annealing first increases the spread in Tc' for short annealing times, due to the stochastic nature of the process, but then tends to reduce it for longer times, which is interesting for technological applications.

  1. Optimization of pressurized water reactor shuffling by simulated annealing with heuristics

    International Nuclear Information System (INIS)

    Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.

    1995-01-01

    Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed

  2. Simulated annealing to handle energy and ancillary services joint management considering electric vehicles

    DEFF Research Database (Denmark)

    Sousa, Tiago M; Soares, Tiago; Morais, Hugo

    2016-01-01

    The massive use of distributed generation and electric vehicles will lead to a more complex management of the power system, requiring new approaches to be used in the optimal resource scheduling field. Electric vehicles with vehicle-to-grid capability can be useful for the aggregator players in the mitigation of renewable sources intermittency and in the ancillary services procurement. In this paper, an energy and ancillary services joint management model is proposed. A simulated annealing approach is used to solve the joint management for the following day, considering the minimization of the aggregator total operation costs. The case study considers a 33-bus distribution network with 66 distributed generation units and 2000 electric vehicles. The proposed simulated annealing is matched with a deterministic approach, allowing an effective and efficient comparison. The simulated annealing presents…

  3. Optimization of the energy production for the Baghdara hydropower plant in Afghanistan using simulated annealing; Optimierung der Energieerzeugung fuer das Wasserkraftwerk Baghdara in Afghanistan mit simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Ayros, E.; Hildebrandt, H.; Peissner, K. [Fichtner GmbH und Co. KG, Stuttgart (Germany). Wasserbau und Wasserkraftwerke; Bardossy, A. [Stuttgart Univ. (Germany). Inst. fuer Wasserbau

    2008-07-01

    Simulated annealing (SA) is an optimization method analogous to the thermodynamic annealing process and a new alternative for optimising the energy production of hydropower systems with storage capabilities. The SA algorithm is presented here and applied to maximize the energy production of the Baghdara hydropower plant in Afghanistan. The results were also compared with a non-linear programming (NLP) optimization method. (orig.)

  4. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations

    Science.gov (United States)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias

    2015-01-01

    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems; thus, the Google result might be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  5. Ideal versus real: simulated annealing of experimentally derived and geometric platinum nanoparticles

    Science.gov (United States)

    Ellaby, Tom; Aarons, Jolyon; Varambhia, Aakash; Jones, Lewys; Nellist, Peter; Ozkaya, Dogan; Sarwar, Misbah; Thompsett, David; Skylaris, Chris-Kriton

    2018-04-01

    Platinum nanoparticles find significant use as catalysts in industrial applications such as fuel cells. Research into their design has focussed heavily on nanoparticle size and shape as they greatly influence activity. Using high throughput, high precision electron microscopy, the structures of commercially available Pt catalysts have been determined, and we have used classical and quantum atomistic simulations to examine and compare them with geometric cuboctahedral and truncated octahedral structures. A simulated annealing procedure was used both to explore the potential energy surface at different temperatures, and also to assess the effect on catalytic activity that annealing would have on nanoparticles with different geometries and sizes. The differences in response to annealing between the real and geometric nanoparticles are discussed in terms of thermal stability, coordination number and the proportion of optimal binding sites on the surface of the nanoparticles. We find that annealing both experimental and geometric nanoparticles results in structures that appear similar in shape and predicted activity, using oxygen adsorption as a measure. Annealing is predicted to increase the catalytic activity in all cases except the truncated octahedra, where it has the opposite effect. As our simulations have been performed with a classical force field, we also assess its suitability to describe the potential energy of such nanoparticles by comparing with large scale density functional theory calculations.

  6. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem.

    Science.gov (United States)

    Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe

    2016-01-01

    A new hybrid Multiphase Simulated Annealing algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed, designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on the Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, applied at the final temperature of the fourth phase, and can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applies a very fast cooling process, and is not very restrictive in accepting new solutions; BAP and BEAP range from high to low and from low to very low temperatures, respectively, and are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect stochastic equilibrium by applying a least-squares method during its execution. MPSABBE parameters are tuned with an analytical method that considers the maximal and minimal deterioration of problem instances. MPSABBE was tested on several PFP instances, showing that using both distributions is better than using only the Boltzmann distribution as in classical SA.
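
    For intuition only, here are the classical Boltzmann acceptance rule and an assumed Bose-Einstein-style analogue (capped at 1); MPSABBE's exact expressions are defined in the paper:

      import math

      def p_boltzmann(delta, temp):
          # classical SA acceptance probability
          return 1.0 if delta <= 0 else math.exp(-delta / temp)

      def p_bose_einstein(delta, temp):
          # assumed Bose-Einstein-style stand-in, not the paper's formula
          if delta <= 0:
              return 1.0
          return min(1.0, 1.0 / (math.exp(delta / temp) - 1.0))

      for d in (0.1, 0.5, 2.0):
          print(d, round(p_boltzmann(d, 1.0), 3), round(p_bose_einstein(d, 1.0), 3))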

  7. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H; von Schwerin, E; Szepessy, A; Tempone, Raul

    2011-01-01

    …an adaptive algorithm for ordinary, stochastic and partial differential equations (in: Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates…

  8. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.

    Science.gov (United States)

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-06-30

    Vehicular traffic congestion is a significant problem that arises in many cities, due to the increasing number of vehicles driving on city roads of limited capacity. Vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoiding traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for the dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the road lengths) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution (TOPSIS) and the Dijkstra algorithm. The weighted sum and TOPSIS methods are used to formulate different attributes in the simulated annealing cost function. For the Sheffield scenario, simulation results show that the improved simulated annealing TOPSIS method improves traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario.
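
    The TOPSIS-style scoring of candidate routes on the two attributes can be sketched as follows (all data values are invented; in the paper such scores feed the simulated annealing cost function):

      import numpy as np

      routes = np.array([
          # [average travel speed (benefit), road length (cost)]
          [45.0, 12.0],
          [30.0,  9.0],
          [55.0, 15.0],
      ])
      v = routes / np.linalg.norm(routes, axis=0) * np.array([0.5, 0.5])  # normalise, weight
      ideal = np.array([v[:, 0].max(), v[:, 1].min()])   # best speed, shortest length
      anti = np.array([v[:, 0].min(), v[:, 1].max()])    # worst of each attribute
      d_pos = np.linalg.norm(v - ideal, axis=1)
      d_neg = np.linalg.norm(v - anti, axis=1)
      print(d_neg / (d_pos + d_neg))                     # closeness: higher = better route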

  9. Defect production in simulated cascades: Cascade quenching and short-term annealing

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1983-01-01

    Defect production in displacement cascades in copper has been modeled using the MARLOWE code to generate cascades and the stochastic annealing code ALSOME to simulate cascade quenching and short-term annealing of isolated cascades. Quenching is accomplished by using exaggerated values for defect mobilities and for critical reaction distances in ALSOME for a very short time. The quenched cascades are then short-term annealed with normal parameter values. The quenching parameter values were empirically determined by comparison with results of resistivity measurements. Throughout the collisional, quenching and short-term annealing phases of cascade development, the high energy cascades continue to behave as a collection of independent lower energy lobes. For recoils above about 30 keV the total number of defects and the numbers of free defects scale with the damage energy. As the energy decreases from 30 keV, defect production varies with the changing nature of the cascade configuration, resulting in more defects per unit damage energy. The simulated annealing of a low fluence of interacting cascades revealed an interstitial shielding effect on depleted zones during Stage I recovery. (orig.)

  10. Loading pattern optimization by multi-objective simulated annealing with screening technique

    International Nuclear Information System (INIS)

    Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.

    2006-01-01

    This paper presents a new multi-objective function made up of the main objective term as well as penalty terms related to the constraints. All terms are represented in the same functional form, and the coefficient of each term is normalized so that each term has equal weight in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computing time in the 3-D neutronics evaluation of trial loading patterns. As a numerical test of the new multi-objective function in loading pattern optimization, optimum loading patterns for the initial core and the cycle 7 reload PWR core of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns were obtained for the initial core through 10 independent simulated annealing optimization runs; for the cycle 7 reload core, one optimum loading pattern has been obtained from a single run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core, and the results will be presented in further work. (authors)

  11. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    Science.gov (United States)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.

  12. Comparison of Lasserre's Measure-based Bounds for Polynomial Optimization to Bounds Obtained by Simulated Annealing

    NARCIS (Netherlands)

    de Klerk, Etienne; Laurent, Monique

    We consider the problem of minimizing a continuous function f over a compact set K. We compare the hierarchy of upper bounds proposed by Lasserre in [SIAM J. Optim. 21(3) (2011), pp. 864-885] to bounds that may be obtained from simulated annealing. We show that, when f is a polynomial and K a convex…

  13. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-01-01

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms better suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one package based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  14. Inverse simulated annealing: Improvements and application to amorphous InSb

    OpenAIRE

    Los, Jan H.; Gabardi, Silvia; Bernasconi, Marco; Kühne, Thomas D.

    2014-01-01

    An improved inverse simulated annealing method is presented to determine the structure of complex disordered systems from first principles in agreement with available experimental data or desired predetermined target properties. The effectiveness of this method is demonstrated by revisiting the structure of amorphous InSb. The resulting network is mostly tetrahedral and in excellent agreement with available experimental data.

  15. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing Chordal Ring structure in optical backbone network. In recent years, topologies based on regular graph structures gained a lot of interest due to their good communication properties for physical topology of the...

  16. Simulated Annealing Genetic Algorithm Based Schedule Risk Management of IT Outsourcing Project

    Directory of Open Access Journals (Sweden)

    Fuqiang Lu

    2017-01-01

    Full Text Available IT outsourcing is an effective way to enhance the core competitiveness of many enterprises. But the schedule risk of an IT outsourcing project may cause enormous economic loss to the enterprise. In this paper, the Distributed Decision Making (DDM) theory and the principal-agent theory are used to build a model for schedule risk management of IT outsourcing projects. In addition, a hybrid algorithm combining simulated annealing (SA) and genetic algorithm (GA) is designed, namely, the simulated annealing genetic algorithm (SAGA). The effect of the proposed model on the schedule risk management problem is analyzed in a simulation experiment. Meanwhile, the simulation results of the three algorithms GA, SA, and SAGA show that SAGA is superior to the other two algorithms in terms of stability and convergence. Consequently, this paper provides a scientific quantitative proposal for decision makers who need to manage the schedule risk of IT outsourcing projects.
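
    The record describes SAGA only at a high level. Below is a minimal sketch of one common way to hybridize the two methods — GA variation operators propose offspring, and an SA-style Metropolis test decides whether a worse offspring may still replace a parent. The function names, operators and parameter values are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def saga_minimize(cost, random_solution, crossover, mutate,
                  pop_size=30, t0=1.0, alpha=0.95, generations=200):
    """Hybrid GA + SA sketch: GA operators generate candidates and an
    SA-style Metropolis test lets occasionally-worse offspring survive."""
    pop = [random_solution() for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        random.shuffle(pop)
        next_pop = []
        for a, b in zip(pop[0::2], pop[1::2]):
            child = mutate(crossover(a, b))
            parent = min(a, b, key=cost)       # better of the two parents
            delta = cost(child) - cost(parent)
            # Accept improvements always; accept worse children sometimes
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                next_pop.extend([child, parent])
            else:
                next_pop.extend([a, b])
        pop = next_pop
        temp *= alpha  # geometric cooling schedule
    return min(pop, key=cost)
```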

  17. Phase diagram of 2D Hubbard model by simulated annealing mean field approximation

    International Nuclear Information System (INIS)

    Kato, Masaru; Kitagaki, Takashi

    1991-01-01

    In order to investigate the stable magnetic structure of the Hubbard model on a square lattice, we utilize the dynamical simulated annealing method proposed by R. Car and M. Parrinello. Results of simulations on a 10 x 10 lattice system with 80 electrons, under the assumption of a collinear magnetic structure, show that the most stable state is an incommensurate spin density wave state with periodic domain walls. (orig.)

  18. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    Science.gov (United States)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
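
    The essential idea — relaxed moves whose violations, such as overlapping nets, are discouraged through the cost function rather than forbidden outright — can be captured by a generic annealing loop. The sketch below is not the paper's implementation; `propose_move`, `wirelength` and `overlap` are placeholders for problem-specific routines.

```python
import math
import random

def anneal_routing(initial, propose_move, wirelength, overlap,
                   t0=10.0, t_min=1e-3, alpha=0.98,
                   moves_per_temp=100, overlap_weight=5.0):
    """SA in which infeasible states (e.g. overlapping nets) are allowed
    but penalized in the cost, so the search stays flexible."""
    def cost(state):
        return wirelength(state) + overlap_weight * overlap(state)

    state, temp = initial, t0
    best, best_cost = state, cost(state)
    while temp > t_min:
        for _ in range(moves_per_temp):
            candidate = propose_move(state)
            delta = cost(candidate) - cost(state)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                state = candidate
                if cost(state) < best_cost:
                    best, best_cost = state, cost(state)
        temp *= alpha
    return best
```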

  19. Resorting the NIST undulator using simulated annealing for field error reduction

    International Nuclear Information System (INIS)

    Denbeaux, Greg; Johnson, Lewis E.; Madey, John M.J.

    2000-01-01

    We have used a simulated annealing algorithm to sort the samarium cobalt blocks and vanadium permendur poles in the hybrid NIST undulator to optimize the spectrum of the emitted light. While simulated annealing has proven highly effective in sorting the SmCo blocks in pure REC undulators, the reliance on magnetically 'soft' poles operating near saturation to concentrate the flux in hybrid undulators introduces a pair of additional variables - the permeability and saturation induction of the poles - which limit the utility of the assumption of superposition on which most simulated annealing codes rely. Detailed magnetic measurements clearly demonstrated the failure of the superposition principle due to random variations in the permeability in the 'unsorted' NIST undulator. To deal with this issue, we measured both the magnetization of the REC blocks and the permeability of the NIST undulator's integrated vanadium permendur poles, and implemented a sorting criterion which minimized the pole-to-pole variations in permeability to satisfy the criteria for realization of superposition on a nearest-neighbor basis. Though still imperfect, the computed spectrum of the radiation from the re-sorted and annealed NIST undulator is significantly superior to that of the original, unsorted device.

  20. The Parameters Optimization of MCR-WPT System Based on the Improved Genetic Simulated Annealing Algorithm

    Directory of Open Access Journals (Sweden)

    Sheng Lu

    2015-01-01

    Full Text Available To solve the problem of parameter selection during the design of a magnetically coupled resonant wireless power transmission system (MCR-WPT), this paper proposes an improved genetic simulated annealing algorithm. Firstly, the equivalent circuit of the system is analyzed in this study and a nonlinear programming mathematical model is built. Secondly, in place of the penalty function method in the genetic algorithm, a selection strategy based on the distance between individuals is adopted to select individuals, which reduces the number of excess empirical parameters. Meanwhile, the method improves the convergence rate and the searching ability by calculating the crossover probability and mutation probability according to the variance of the population's fitness. At last, the simulated annealing operator is added to increase the local search ability of the method. The simulation shows that the improved method can escape the local optimum and reach the global optimum solution faster. The optimized system can achieve the practical requirements.

  1. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    Science.gov (United States)

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
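
    The abstract does not spell out SPAN's communication scheme; the sketch below only conveys the broad idea of evaluating several candidate moves from the current point in parallel within one annealing step. The quadratic test cost, step size and schedule are assumptions chosen to keep the example self-contained.

```python
import math
import random
from multiprocessing import Pool

def cost(x):
    # Simple quadratic test problem, similar in spirit to the one
    # used to benchmark SPAN's scaling
    return sum(xi * xi for xi in x)

def perturb(x, step=0.5):
    return [xi + random.uniform(-step, step) for xi in x]

def parallel_anneal(dim=10, workers=4, t0=1.0, alpha=0.95, iters=200):
    x, temp = [random.uniform(-5, 5) for _ in range(dim)], t0
    with Pool(workers) as pool:
        for _ in range(iters):
            # Each worker evaluates one candidate from the neighborhood
            candidates = [perturb(x) for _ in range(workers)]
            costs = pool.map(cost, candidates)
            best_i = min(range(workers), key=costs.__getitem__)
            delta = costs[best_i] - cost(x)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                x = candidates[best_i]
            temp *= alpha
    return x

if __name__ == "__main__":  # guard required for multiprocessing
    print(cost(parallel_anneal()))
```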

  2. Parameter identification based on modified simulated annealing differential evolution algorithm for giant magnetostrictive actuator

    Science.gov (United States)

    Gao, Xiaohui; Liu, Yongguang

    2018-01-01

    There is a seriously nonlinear relationship between input and output in the giant magnetostrictive actuator (GMA), so how to establish a mathematical model and identify its parameters is very important for studying its characteristics and improving control accuracy. The current-displacement model is first built based on the Jiles-Atherton (J-A) model theory, the Ampere loop theorem and the stress-magnetism coupling model. Then the relationships between the unknown parameters and the hysteresis loops are studied to determine the data-taking scope. The modified simulated annealing differential evolution algorithm (MSADEA) is proposed by taking full advantage of the differential evolution algorithm's fast convergence and the simulated annealing algorithm's jumping property to enhance convergence speed and performance. Simulation and experiment results show that this algorithm is not only simple and efficient, but also has fast convergence speed and high identification accuracy.
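
    MSADEA itself is not detailed in the record; the following sketch shows the general pattern it names — standard DE/rand/1/bin trial vectors whose greedy selection is softened by an SA-style Metropolis test. All parameter values are illustrative.

```python
import math
import random

def de_sa_minimize(cost, bounds, pop_size=20, f=0.6, cr=0.9,
                   t0=1.0, alpha=0.97, generations=300):
    """Differential evolution with simulated-annealing acceptance."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1/bin mutation and crossover
            trial = [aj + f * (bj - cj) if random.random() < cr else xj
                     for xj, aj, bj, cj in zip(pop[i], a, b, c)]
            trial = [min(max(v, lo), hi)
                     for v, (lo, hi) in zip(trial, bounds)]
            delta = cost(trial) - cost(pop[i])
            # Metropolis test lets some worse trials through, which is
            # the simulated-annealing ingredient of the hybrid
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                pop[i] = trial
        temp *= alpha
    return min(pop, key=cost)
```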

  3. Atomic scale simulations of arsenic ion implantation and annealing in silicon

    International Nuclear Information System (INIS)

    Caturla, M.J.; Diaz de la Rubia, T.; Jaraiz, M.

    1995-01-01

    We present results of multiple-time-scale simulations of 5, 10 and 15 keV low temperature ion implantation of arsenic in silicon (100), followed by high temperature anneals. The simulations start with a molecular dynamics (MD) calculation of the primary state of damage after 10 ps. The results are then coupled to a kinetic Monte Carlo (MC) simulation of bulk defect diffusion and clustering. Dose accumulation is achieved by considering that at low temperatures the damage produced in the lattice is stable. After the desired dose is accumulated, the system is annealed at 800 degrees C for several seconds. The results provide information on the evolution of the damage microstructure over macroscopic length and time scales and afford direct comparison to experimental results. We discuss the database of inputs to the MC model and how it affects the diffusion process.

  4. Cascade annealing: an overview

    International Nuclear Information System (INIS)

    Doran, D.G.; Schiffgens, J.O.

    1976-04-01

    Concepts and an overview of radiation displacement damage modeling and annealing kinetics are presented. Short-term annealing methodology is described and results of annealing simulations performed on damage cascades generated using the Marlowe and Cascade programs are included. Observations concerning the inconsistencies and inadequacies of current methods are presented along with simulation of high energy cascades and simulation of longer-term annealing

  5. Physical Mapping Using Simulated Annealing and Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg

    2003-01-01

    optimization method when searching for an ordering of the fragments in PM. In this paper, we applied an evolutionary algorithm to the problem, and compared its performance to that of SA and local search on simulated PM data, in order to determine the important factors in finding a good ordering of the segments....... The analysis highlights the importance of a good PM model, a well-correlated fitness function, and high quality hybridization data. We suggest that future work in PM should focus on design of more reliable fitness functions and on developing error-screening algorithms....

  6. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimally intrusive work. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular attention has been paid to the development of scale-sensitive load-balancing operations that allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  7. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities

    Directory of Open Access Journals (Sweden)

    Hayder Amer

    2016-06-01

    Full Text Available Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles driving on city roads of limited capacity. Vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for the dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. For the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO2 emissions as compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario.

  8. Application of simulated annealing for simultaneous retrieval of particle size distribution and refractive index

    International Nuclear Information System (INIS)

    Ma, Lin; Kranendonk, Laura; Cai, Weiwei; Zhao, Yan; Baba, Justin S.

    2009-01-01

    This paper describes the application of the simulated annealing technique to the simultaneous retrieval of particle size distribution and refractive index based on polarization modulated scattering (PMS) measurements. The PMS technique is a well-established method to measure multiple elements of the Mueller scattering matrix. However, the inference of the scatterers' properties (e.g., the size distribution function and refractive index) from such measurements involves solving an ill-conditioned inverse problem. In this paper, a new inversion technique is demonstrated to infer particle properties from PMS measurements. The new technique formulates the inverse problem as a minimization problem, which is then solved by the simulated annealing technique. Both numerical and experimental investigations of the new inversion technique are presented in the paper. The results obtained demonstrate the robustness and reliability of the new algorithm, and support its expanded application in scientific and technological areas involving particulates/aerosols.

  9. Fast and accurate protein substructure searching with simulated annealing and GPUs

    Directory of Open Access Journals (Sweden)

    Stivala Alex D

    2010-09-01

    Full Text Available Abstract Background Searching a database of protein structures for matches to a query structure, or occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.

  10. Use of simulated annealing in standardization and optimization of the acerola wine production

    Directory of Open Access Journals (Sweden)

    Sheyla dos Santos Almeida

    2014-06-01

    Full Text Available In this study, seven wine samples were prepared varying the amount of pulp of acerola fruits and the sugar content using the simulated annealing technique to obtain the optimal sensory qualities and cost for the wine produced. S. cerevisiae yeast was used in the fermentation process and the sensory attributes were evaluated using a hedonic scale. Acerola wines were classified as sweet, with 11°GL of alcohol concentration and with aroma, taste, and color characteristics of the acerola fruit. The simulated annealing experiments showed that the best conditions were found at mass ratio between 1/7.5-1/6 and total soluble solids between 28.6-29.0 °Brix, from which the sensory acceptance scores of 6.9, 6.8, and 8.8 were obtained for color, aroma, and flavor, respectively, with a production cost 43-45% lower than the cost of traditional wines commercialized in Brazil.
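
    To make the optimization concrete: an annealing search over the two decision variables reported here (pulp-to-water mass ratio and total soluble solids) might look like the sketch below, where `sensory_score` stands in for a hypothetical model fitted to the hedonic panel data; the ranges, step sizes and schedule are assumptions.

```python
import math
import random

def anneal_recipe(sensory_score, t0=1.0, alpha=0.95, iters=500):
    """Maximize an assumed sensory score over (mass ratio, deg. Brix)."""
    ratio, brix = random.uniform(1 / 9, 1 / 5), random.uniform(25.0, 31.0)
    temp = t0
    for _ in range(iters):
        cand = (min(max(ratio + random.gauss(0, 0.01), 1 / 9), 1 / 5),
                min(max(brix + random.gauss(0, 0.3), 25.0), 31.0))
        delta = sensory_score(*cand) - sensory_score(ratio, brix)
        # Maximization: keep improvements, sometimes keep worse recipes
        if delta >= 0 or random.random() < math.exp(delta / temp):
            ratio, brix = cand
        temp *= alpha
    return ratio, brix
```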

  11. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing

    International Nuclear Information System (INIS)

    Menin, O.H.; Martinez, A.S.; Costa, A.M.

    2016-01-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm for different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectral shapes accurately. It should be noted that the regularization function in this algorithm was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. - Highlights: • X-ray spectrum reconstruction from attenuation data using generalized simulated annealing. • The algorithm employs a smoothing regularization function, and sets the initial acceptance and visitation temperatures. • The algorithm is automated by standardizing the terms of the objective function. • The algorithm is compared with classical methods.
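
    Under assumed notation, the objective the record describes — a data-misfit term plus a smoothing regularization over the discretized spectrum — can be written as below; the annealing minimizer itself is omitted, since any SA variant can drive such an objective.

```python
import numpy as np

def objective(spectrum, attenuation_matrix, measured, reg_weight):
    """Misfit between modeled and measured attenuation plus a smoothness
    penalty (sum of squared second differences of the spectrum)."""
    misfit = np.sum((attenuation_matrix @ spectrum - measured) ** 2)
    smoothness = np.sum(np.diff(spectrum, n=2) ** 2)
    return misfit + reg_weight * smoothness
```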

  12. PedMine – A simulated annealing algorithm to identify maximally unrelated individuals in population isolates

    OpenAIRE

    Douglas, Julie A.; Sandefur, Conner I.

    2008-01-01

    In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a ge...

  13. Compatible topologies and parameters for NMR structure determination of carbohydrates by simulated annealing

    OpenAIRE

    Feng, Yingang

    2017-01-01

    The use of NMR methods to determine the three-dimensional structures of carbohydrates and glycoproteins is still challenging, in part because of the lack of standard protocols. In order to increase the convenience of structure determination, the topology and parameter files for carbohydrates in the program Crystallography & NMR System (CNS) were investigated and new files were developed to be compatible with the standard simulated annealing protocols for proteins and nucleic acids. Recalculat...

  14. A study on three dimensional layout design by the simulated annealing method

    International Nuclear Information System (INIS)

    Jang, Seung Ho

    2008-01-01

    Modern engineered products are becoming increasingly complicated and most consumers prefer compact designs. Layout design plays an important role in many engineered products. The objective of this study is to suggest a method to apply the simulated annealing method to the arbitrarily shaped three-dimensional component layout design problem. The suggested method not only optimizes the packing density but also satisfies constraint conditions among the components. The algorithm and its implementation as suggested in this paper are extendable to other research objectives

  15. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    Science.gov (United States)

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi- and Single-Objective optimization methods. A significant performance enhancement over traditional techniques can be inferred from the results.

  16. Defect production in simulated cascades: cascade quenching and short-term annealing

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1982-01-01

    Defect production in high energy displacement cascades has been modeled using the computer code MARLOWE to generate the cascades and the stochastic computer code ALSOME to simulate the cascade quenching and short-term annealing of isolated cascades. The quenching is accomplished by using ALSOME with exaggerated values for defect mobilities and critical reaction distances for recombination and clustering, which are in effect until the number of defect pairs is equal to the value determined from resistivity experiments at 4K. Then normal mobilities and reaction distances are used during short-term annealing to a point representative of Stage III recovery. Effects of cascade interactions at low fluences are also being investigated. The quenching parameter values were empirically determined for 30 keV cascades. The results agree well with experimental information throughout the range from 1 keV to 100 keV. Even after quenching and short-term annealing, the high energy cascades behave as a collection of lower energy subcascades and lobes. Cascades generated in a crystal having thermal displacements were found to be in better agreement with experiments after quenching and annealing than those generated in a non-thermal crystal

  17. Direct comparison of quantum and simulated annealing on a fully connected Ising ferromagnet

    Science.gov (United States)

    Wauters, Matteo M.; Fazio, Rosario; Nishimori, Hidetoshi; Santoro, Giuseppe E.

    2017-08-01

    We compare the performance of quantum annealing (QA, through Schrödinger dynamics) and simulated annealing (SA, through a classical master equation) on the p -spin infinite range ferromagnetic Ising model, by slowly driving the system across its equilibrium, quantum or classical, phase transition. When the phase transition is second order (p =2 , the familiar two-spin Ising interaction) SA shows a remarkable exponential speed-up over QA. For a first-order phase transition (p ≥3 , i.e., with multispin Ising interactions), in contrast, the classical annealing dynamics appears to remain stuck in the disordered phase, while we have clear evidence that QA shows a residual energy which decreases towards zero when the total annealing time τ increases, albeit in a rather slow (logarithmic) fashion. This is one of the rare examples where a limited quantum speedup, a speedup by QA over SA, has been shown to exist by direct solutions of the Schrödinger and master equations in combination with a nonequilibrium Landau-Zener analysis. We also analyze the imaginary-time QA dynamics of the model, finding a 1 /τ2 behavior for all finite values of p , as predicted by the adiabatic theorem of quantum mechanics. The Grover-search limit p (odd )=∞ is also discussed.
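
    For reference, the infinite-range p-spin ferromagnet named in the abstract is conventionally written as follows (this is the standard form of the model, not a formula quoted from the paper), with the transverse field Γ driving the quantum annealing:

```latex
H(\Gamma) = -N\left(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{z}\right)^{p}
            - \Gamma\sum_{i=1}^{N}\sigma_i^{x}
```

    The classical limit Γ = 0 is the energy landscape annealed by SA; p = 2 yields the second-order transition and p ≥ 3 the first-order one discussed above.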

  18. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.

  19. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path dependent, time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^{-3}) using a single level version of the adaptive algorithm to O((TOL^{-1} log(TOL))^2).

  20. Identification of exploration strategies for electric power distribution network using simulated annealing; Identificao de estrategias de exploracao de redes de distribuicao de energia electrica utilizando simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Jorge; Saraiva, J. Tome; Leao, Maria Teresa Ponce de [Instituto de Engenharia de Sistemas e Computadores (INESC), Porto (Portugal). E-mail: jpereira@inescn.pt; jsaraiva@inescn.pt; mleao@inescn.pt

    1999-07-01

    This paper presents a model for the identification of optimum operation strategies for electric power distribution networks, with the aim of minimizing active power losses. This objective can be attained by modifying the transformer connections or changing the capacitor banks in service. In addition, voltage ranges for each bus and current intensity limits for the branches are specified in order to make the model more realistic. The paper describes the simulated annealing approach used to overcome the mentioned difficulties. The application of the method to the problem allows the identification of solutions based on exact models. The application is illustrated with the results obtained using an IEEE test network and a real distribution network with 645 buses.

  1. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  2. Displacement cascades and defect annealing in tungsten, Part II: Object kinetic Monte Carlo simulation of tungsten cascade aging

    Energy Technology Data Exchange (ETDEWEB)

    Nandipati, Giridhar, E-mail: giridhar.nandipati@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA (United States); Setyawan, Wahyu; Heinisch, Howard L. [Pacific Northwest National Laboratory, Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Laboratory, Richland, WA (United States); Department of Physics, University of Washington, Seattle, WA 98195 (United States); Kurtz, Richard J. [Pacific Northwest National Laboratory, Richland, WA (United States); Wirth, Brian D. [University of Tennessee, Knoxville, TN (United States)

    2015-07-15

    The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.

  4. Time Simulation of Bone Adaptation

    DEFF Research Database (Denmark)

    Bagge, Mette

    1998-01-01

    The structural adaptation of a three-dimensional finite element model of the proximal femur is considered. Presuming the bone possesses the optimal structure under the given loads, the bone material distribution is found by minimizing the strain energy averaged over ten load cases with a volume.... The remodeling algorithm is derived directly from the optimization recurrence formula, and in a time increment the material distribution changes towards the optimal structure for the present load case. The speed of remodeling is taken from clinical data. Numerical examples of respectively increasing and reducing...

  5. Annealing effect on thermodynamic and physical properties of mesoporous silicon: A simulation and nitrogen sorption study

    Science.gov (United States)

    Kumar, Pushpendra; Huber, Patrick

    2016-04-01

    The discovery of porous silicon formation in silicon substrates in 1956, while electro-polishing crystalline Si in hydrofluoric acid (HF), triggered large scale investigations of porous silicon formation and of the changes in its physical and chemical properties with thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200˚C to 800˚C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption/desorption isotherms show hysteresis typical of capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind hysteresis formation. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different behavior of adsorption and desorption of nitrogen in PS with pore diameter is discussed in terms of the concave menisci formed inside the pore space, which were shown to be related to the induced pressure as the pore diameter varies from 7.2 nm to 3.4 nm.

  6. Simulated annealing of displacement cascades in FCC metals. 1. Beeler cascades

    International Nuclear Information System (INIS)

    Doran, D.G.; Burnett, R.A.

    1974-09-01

    An important source of damage to structural materials in fast reactors is the displacement of atoms from normal lattice sites. A high energy neutron may impart sufficient energy to an atom to initiate a displacement cascade consisting of a localized high density of hundreds of interstitials and vacancies. These defects subsequently interact to form clusters and to reduce their density by mutual annihilation. This short term annealing of an isolated cascade has been simulated at high and low temperatures using a correlated random walk model. The cascade representations used were developed by Beeler and the point defect properties were based on the model of γ-iron by Johnson. Low temperature anneals, characterized by no vacancy migration and a 104 site annihilation region (AR), resulted in 49 defect pairs at 20 keV and 11 pairs at 5 keV. High temperature anneals, characterized by both interstitial and vacancy migration and a 32 site AR, resulted in 68 pairs at 20 keV and 18 pairs at 5 keV when no cluster dissociation was permitted; most of the vacancies were in immobile clusters. These high temperature values dropped to 40 and 14 upon dissolution of the vacancy clusters. Parameter studies showed that, at a given temperature, the large AR resulted in about one-half as many defects as the small AR. Cluster size distributions and examples of spatial configurations are included. (U.S.)

  7. Simulated annealing algorithm for solving chambering student-case assignment problem

    Science.gov (United States)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it rises whenever the complexity of preferences, the existence of real-world constraints and the problem size increase. This study focuses on solving a chambering student-case assignment problem by using a simulated annealing algorithm, where this problem is classified under the project assignment problem. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, it is essential for law graduates to read in chambers before they are qualified to become legal counselors. Thus, assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum cost greedy heuristic in order to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results has shown that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
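
    A minimal sketch of the two-stage approach described — minimum-cost greedy construction followed by annealing refinement — is shown below with hypothetical data structures: `times[s][c]` is the time student s needs on case c, and the objective is assumed to be the time at which the last student finishes.

```python
import math
import random

def greedy_then_anneal(times, t0=10.0, alpha=0.999, iters=5000):
    """Assign cases to chambering students: greedy build, SA refinement."""
    n_students, n_cases = len(times), len(times[0])

    def completion_time(assign):
        load = [0.0] * n_students
        for case, student in enumerate(assign):
            load[student] += times[student][case]
        return max(load)  # students work on their own cases in parallel

    # Greedy construction: each case goes to the student whose current
    # load plus the case's processing time is smallest
    assign, load = [], [0.0] * n_students
    for case in range(n_cases):
        s = min(range(n_students),
                key=lambda st: load[st] + times[st][case])
        assign.append(s)
        load[s] += times[s][case]

    temp = t0
    for _ in range(iters):
        cand = assign[:]
        cand[random.randrange(n_cases)] = random.randrange(n_students)
        delta = completion_time(cand) - completion_time(assign)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            assign = cand
        temp *= alpha
    return assign
```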

  8. First-order design of geodetic networks using the simulated annealing method

    Science.gov (United States)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design of a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classic theory of this problem and the optimization methods are reviewed. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to the iterative heuristic techniques of operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. The basic formulation and some examples are studied.

  9. Simulated annealing CFAR threshold selection for South African ship detection in ASAR imagery

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2014-07-01

    Full Text Available [Fig. 3 (flowchart residue): the iterative procedure of Simulated Annealing — the current threshold plane is altered if the candidate is better, or, if the candidate is worse, when a random number is less than the Boltzmann probability.] Starting at some initial threshold plane Ti(x, y), each iteration tests if the new solution T is better than the previous best solution Tb(x, y). A possible "bad" candidate can replace the current best due to the Boltzmann probability. A new threshold plane Tb(x, y) is defined which is mapped to the 2D distribution map...
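
    The acceptance rule summarized in the recovered figure caption is the standard Metropolis/Boltzmann test; a minimal sketch (not code from the paper) is:

```python
import math
import random

def accept(delta_cost, temperature):
    """Better candidates always replace the current threshold plane;
    worse ones do so with probability exp(-delta/T), which shrinks as
    the temperature is lowered."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)
```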

  10. The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling

    Directory of Open Access Journals (Sweden)

    A. Bonilla-Petriciolet

    2007-03-01

    Full Text Available In this paper we report the application and evaluation of the simulated annealing (SA) optimization method in parameter estimation for vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least squares and error-in-variable approaches. The reliability and efficiency of the data-fitting procedure are also considered using different values for the algorithm parameters of the SA method. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, in difficult problems it can still converge to local optima of the objective function.

  11. Protein structure predictions with Monte Carlo simulated annealing: Case for the β-sheet

    Science.gov (United States)

    Okamoto, Y.; Fukugita, M.; Kawai, H.; Nakazawa, T.

    Work is continued on the prediction of the three-dimensional structure of peptides and proteins with Monte Carlo simulated annealing, using only a generic energy function and the amino acid sequence as input. We report that a β-sheet-like structure is successfully predicted for a fragment of bovine pancreatic trypsin inhibitor which is known to have the β-sheet structure in nature. Together with the results for the α-helix structure reported earlier, this means that a successful prediction can be made, at least at a qualitative level, for the two dominant building blocks of proteins, the α-helix and the β-sheet, from the information of the amino acid sequence alone.

  12. Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems

    Science.gov (United States)

    Cruz-Chávez, Marco Antonio

    2015-11-01

    This paper presents a neighbourhood generation mechanism for job shop scheduling problems (JSSPs). In order to obtain a feasible neighbour with the generation mechanism, it is only necessary to permute an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. It is demonstrated that the neighbourhood generation mechanism is very efficient and effective within simulated annealing.
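
    The mechanism can be stated compactly: choose an adjacent pair of operations on one machine and swap it only when no slack time separates the pair. The sketch below assumes a schedule represented as per-machine operation sequences plus a `slack_between` predicate supplied by the scheduler; both are assumptions about representation, not the paper's code.

```python
import random

def adjacent_swap_neighbor(machine_sequences, slack_between):
    """Generate a JSSP neighbour by permuting one adjacent pair of
    operations on a machine; per the paper, feasibility is guaranteed
    when there is no slack time between the two operations."""
    neighbor = [seq[:] for seq in machine_sequences]
    # Collect all adjacent pairs with zero slack between them
    candidates = [(m, i)
                  for m, seq in enumerate(neighbor)
                  for i in range(len(seq) - 1)
                  if slack_between(seq[i], seq[i + 1]) == 0]
    if not candidates:
        return neighbor  # no feasible swap found; return an unchanged copy
    m, i = random.choice(candidates)
    neighbor[m][i], neighbor[m][i + 1] = neighbor[m][i + 1], neighbor[m][i]
    return neighbor
```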

  13. Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xu Mingji

    2017-01-01

    Full Text Available It is very effective to solve multi-variable optimization problems using a hierarchical genetic algorithm. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to solve the multiple traveling salesman problem and can improve the quality of the solution. First, it improves the design of the hierarchical structure of chromosomes with respect to the redundancy of the hierarchical algorithm, and suggests a suffix design for chromosomes. Second, concerning the premature convergence problems of the genetic algorithm, it proposes a self-identify crossover operator and mutation. Third, to address the weak local search ability of the genetic algorithm, it stretches the fitness by mixing the genetic algorithm with the simulated annealing algorithm. Fourth, it simulates the problem of N traveling salesmen and M cities so as to verify the feasibility of the algorithm. The simulation and calculation show that this improved algorithm converges quickly to the best global solution, which means the algorithm is encouraging for practical use.

  14. Parallel adaptive simulations on unstructured meshes

    International Nuclear Information System (INIS)

    Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A

    2007-01-01

    This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers.

  15. Optimization of cladding parameters for resisting corrosion on low carbon steels using simulated annealing algorithm

    Science.gov (United States)

    Balan, A. V.; Shivasankaran, N.; Magibalan, S.

    2018-04-01

    Low carbon steels used in chemical industries are frequently affected by corrosion. Cladding is a surfacing process used for depositing a thick layer of filler metal on highly corrodible materials to achieve corrosion resistance. Flux cored arc welding (FCAW) is preferred for the cladding process due to its augmented efficiency and higher deposition rate. In this cladding process, the effect of corrosion can be minimized by controlling output responses such as minimizing dilution and penetration and maximizing bead width, reinforcement and ferrite number. This paper deals with the multi-objective optimization of flux cored arc welding responses by controlling process parameters such as wire feed rate, welding speed, nozzle-to-plate distance and welding gun angle for super duplex stainless steel material using the simulated annealing technique. A regression equation has been developed and validated using the ANOVA technique. The multi-objective optimization of weld bead parameters was carried out using simulated annealing to obtain the optimum bead geometry for reducing corrosion. The potentiodynamic polarization test reveals a balanced formation of fine ferrite and austenite particles and a desensitized microstructure in the optimized clad bead.

  16. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)

    2015-02-03

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
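
    To illustrate the combination described here: each annealing move relocates one gauge to a candidate site and is accepted by the Metropolis rule on the change in estimated average kriging variance. `average_kriging_variance` is a placeholder for the geostatistical computation, which the record does not detail.

```python
import math
import random

def relocate_gauges(sites, candidate_sites, average_kriging_variance,
                    t0=1.0, alpha=0.95, iters=1000):
    """Variance-reduction network design by SA: each move relocates one
    rain gauge; moves are judged by the resulting kriging variance."""
    network, temp = list(sites), t0
    for _ in range(iters):
        cand = list(network)
        cand[random.randrange(len(cand))] = random.choice(candidate_sites)
        delta = (average_kriging_variance(cand)
                 - average_kriging_variance(network))
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            network = cand
        temp *= alpha
    return network
```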

  17. IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2014-09-01

    Full Text Available The subject of this research is deep learning methods, in which automatic construction of feature transforms takes place in pattern recognition tasks. Multilayer autoencoders have been taken as the considered type of deep learning networks. Autoencoders perform a nonlinear feature transform with logistic regression as an upper classification layer. In order to verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing for tuning the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a decrease of the recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method, which is based on local optimization. Thus, no overfitting effect appears, and the possibility of improving recognition in deep learning networks by global optimization methods (in terms of increased recognition probability) is confirmed. The research results can be applied to improve the probability of pattern recognition in fields which require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.

  18. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    International Nuclear Information System (INIS)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-01-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  19. Solving the patient zero inverse problem by using generalized simulated annealing

    Science.gov (United States)

    Menin, Olavo H.; Bauch, Chris T.

    2018-01-01

    Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm's performance with respect to the generalization parameter q̃v and the fraction ρ of infected nodes for which infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.

  20. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, i.e., defining the optimum number and location of additional boreholes. Quite a lot of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have caused the algorithm output to be sensitive to variations of grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, the application of particle swarm optimization is more appropriate than simulated annealing.

  1. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Science.gov (United States)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  2. Temporary Workforce Planning with Firm Contracts: A Model and a Simulated Annealing Heuristic

    Directory of Open Access Journals (Sweden)

    Muhammad Al-Salamah

    2011-01-01

    The aim of this paper is to introduce a model for temporary staffing when temporary employment is managed by firm contracts and to propose a simulated annealing-based method to solve the model. Temporary employment is a policy frequently used to adjust the working hour capacity to fluctuating demand. Temporary workforce planning models have been unnecessarily simplified to account for only periodic hiring and laying off; a company can review its workforce requirement every period and make hire-fire decisions accordingly, usually with a layoff cost. We present a more realistic temporary workforce planning model that assumes a firm contract between the worker and the company, which can extend to several periods. The model assumes the traditional constraints, such as inventory balance constraints, worker availability, and labor hour mix. The costs are the inventory holding cost, training cost of the temporary workers, and the backorder cost. The mixed integer model developed for this case has been found to be difficult to solve even for small problem sizes; therefore, a simulated annealing algorithm is proposed to solve the mixed integer model. The performance of the SA algorithm is compared with the CPLEX solution.

  3. Prediction of Flood Warning in Taiwan Using Nonlinear SVM with Simulated Annealing Algorithm

    Science.gov (United States)

    Lee, C.

    2013-12-01

    Flooding is an important issue in Taiwan: the island's narrow, high topography makes many rivers steep, and tropical depressions such as typhoons regularly cause them to flood. Predicting river flow under extreme rainfall is therefore important for the government when announcing flood warnings. Every time a typhoon passes through Taiwan, floods occur along some rivers. In Taiwan the warning is classified into three levels according to warning water levels. The purpose of this study is to predict the flood warning level from information on precipitation, rainfall duration and riverbed slope. To classify the warning level from this information, a machine learning model, the nonlinear support vector machine (SVM), is formulated. In addition, simulated annealing (SA), a probabilistic heuristic algorithm, is used to determine the optimal parameters of the SVM model. A case study of flood-prone rivers of different gradients in Taiwan is conducted. The contribution of this SVM model with simulated annealing is its capability of making efficient flood warning announcements and keeping residents along the rivers from the danger of floods.
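
    As a rough illustration of this pipeline, the hedged sketch below tunes the C and gamma hyperparameters of an RBF support vector machine with simulated annealing; the synthetic three-feature, three-class dataset is an assumption standing in for the precipitation, duration and slope inputs, since the paper's Taiwanese hydrological data are not available here.

    ```python
    import math
    import random

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic stand-in for the flood-warning features (precipitation,
    # rainfall duration, riverbed slope) and the three warning levels.
    X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                               n_redundant=0, n_classes=3,
                               n_clusters_per_class=1, random_state=42)

    def cv_accuracy(log_c, log_gamma):
        clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
        return cross_val_score(clf, X, y, cv=5).mean()

    random.seed(1)
    state = (0.0, -1.0)  # (log10 C, log10 gamma)
    score = cv_accuracy(*state)
    best_state, best_score = state, score
    t = 1.0
    for _ in range(60):  # short run for illustration
        cand = (state[0] + random.gauss(0, 0.5),
                state[1] + random.gauss(0, 0.5))
        cand_score = cv_accuracy(*cand)
        if (cand_score > score
                or random.random() < math.exp((cand_score - score) / t)):
            state, score = cand, cand_score
            if score > best_score:
                best_state, best_score = state, score
        t *= 0.95  # geometric cooling (assumption)

    print(best_state, round(best_score, 3))
    ```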

  4. Lattice Boltzmann simulation of flow and heat transfer in random porous media constructed by simulated annealing algorithm

    International Nuclear Information System (INIS)

    Liu, Minghua; Shi, Yong; Yan, Jiashu; Yan, Yuying

    2017-01-01

    Highlights: • A numerical capability combining the lattice Boltzmann method with the simulated annealing algorithm is developed. • Digitized representations of random porous media are constructed using limited but meaningful statistical descriptors. • Pore-scale flow and heat transfer information in random porous media is obtained by lattice Boltzmann simulation. • The effective properties at the representative elementary volume scale are well specified using appropriate upscale averaging. - Abstract: In this article, the lattice Boltzmann (LB) method for transport phenomena is combined with the simulated annealing (SA) algorithm for digitized porous-medium construction to study flow and heat transfer in random porous media. Importantly, in contrast to previous studies which simplify porous media as arrays of regularly shaped objects or effective pore networks, the LB + SA method in this article can model statistically meaningful random porous structures with irregular morphology, and simulate pore-scale transport processes inside them. Pore-scale isothermal flow and heat conduction in a set of constructed random porous media characterized by statistical descriptors were then simulated using the LB + SA method. The corresponding averages over the computational volumes and the related effective transport properties were also computed based on these pore-scale numerical results. Good agreement between the numerical results and theoretical predictions or experimental data on the representative elementary volume scale was found. The numerical simulations in this article demonstrate that the combination of the LB method with the SA algorithm is a viable and powerful numerical strategy for simulating transport phenomena in random porous media with complex geometries.
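
    The construction step can be pictured with a small sketch in the spirit of annealing-based (Yeong-Torquato-style) reconstruction; this is our reading, not the authors' code, and the reference medium, descriptors and schedule are all assumptions. A binary grid is annealed, by porosity-preserving pixel swaps, until its row-wise two-point correlation matches that of a correlated reference.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    n, solid_fraction, max_lag = 48, 0.4, 8

    def row_two_point(g):
        """S2(r) along rows: probability that two pixels r apart are both solid."""
        return np.array([(g * np.roll(g, r, axis=1)).mean()
                         for r in range(1, max_lag + 1)])

    # Correlated "reference" medium: threshold a smoothed random field
    # (an assumption standing in for a measured microstructure).
    field = gaussian_filter(rng.random((n, n)), sigma=2, mode="wrap")
    reference = (field <= np.quantile(field, solid_fraction)).astype(int)
    target = row_two_point(reference)

    # Start from an uncorrelated medium with exactly the same solid fraction.
    grid = reference.copy().reshape(-1)
    rng.shuffle(grid)
    grid = grid.reshape(n, n)

    def energy(g):
        return float(((row_two_point(g) - target) ** 2).sum())

    e, t = energy(grid), 1e-3
    for _ in range(4000):
        # Swap one solid and one void pixel: porosity is preserved exactly.
        solid = np.argwhere(grid == 1)
        void = np.argwhere(grid == 0)
        i = tuple(solid[rng.integers(len(solid))])
        j = tuple(void[rng.integers(len(void))])
        grid[i], grid[j] = 0, 1
        e_new = energy(grid)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new
        else:
            grid[i], grid[j] = 1, 0  # reject: undo the swap
        t *= 0.999

    print("final correlation mismatch:", e)
    ```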

  5. An Evaluation of the Use of Simulated Annealing to Optimize Thinning Rates for Single Even-Aged Stands

    Directory of Open Access Journals (Sweden)

    Kai Moriguchi

    2015-01-01

    We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm’s versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method’s reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.

  6. Optimization of Gamma Knife treatment planning via guided evolutionary simulated annealing

    International Nuclear Information System (INIS)

    Zhang Pengpeng; Dean, David; Metzger, Andrew; Sibata, Claudio

    2001-01-01

    We present a method for generating optimized Gamma Knife™ (Elekta, Stockholm, Sweden) radiosurgery treatment plans. This semiautomatic method produces a highly conformal shot packing plan for the irradiation of an intracranial tumor. We simulate optimal treatment planning criteria with a probability function that is linked to every voxel in a volumetric (MR or CT) region of interest. This sigmoidal P+ parameter models the requirement of conformality (i.e., tumor ablation and normal tissue sparing). After determination of initial radiosurgery treatment parameters, a guided evolutionary simulated annealing (GESA) algorithm is used to find the optimal size, position, and weight for each shot. The three-dimensional GESA algorithm searches the shot parameter space more thoroughly than is possible during manual shot packing and provides one plan that is suitable to the treatment criteria of the attending neurosurgeon and radiation oncologist. The result is a more conformal plan, which also reduces redundancy, and saves treatment administration time.

  7. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    Percentage depth dose (PDD) curves of electron beams represent an important item of data in radiation therapy, since they describe the dosimetric properties of these beams. Accurate transport theory, as well as the Monte Carlo method, has shown clear differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the accelerator's nominal energy. In radiotherapy, the electron spectra should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from central-axis PDD: the Monte Carlo method, direct measurement and inverse reconstruction. In this work the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, being an optimal alternative to the other options. (author)
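
    A toy version of such an inverse reconstruction is sketched below. The monoenergetic depth-dose kernels are crude analytic shapes invented for illustration (real kernels would come from measurement or Monte Carlo transport), and simulated annealing recovers the bin weights whose superposition matches a "measured" PDD.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Depth grid and synthetic per-energy depth-dose kernels. The Gaussian
    # shapes are assumptions standing in for Monte Carlo monoenergetic PDDs.
    depth = np.linspace(0.0, 6.0, 60)            # cm
    energies = np.array([6.0, 9.0, 12.0, 15.0])  # MeV bins (assumed)
    kernels = np.array([np.exp(-((depth - e / 4.0) ** 2) / 1.5)
                        for e in energies])

    true_w = np.array([0.1, 0.4, 0.4, 0.1])      # "unknown" spectrum
    measured = true_w @ kernels                  # noiseless "measured" PDD

    def misfit(w):
        return float(((w @ kernels - measured) ** 2).sum())

    w = np.full(4, 0.25)                         # flat initial spectrum
    e, t = misfit(w), 1e-2
    best_w, best_e = w.copy(), e
    for _ in range(20000):
        cand = np.clip(w + rng.normal(0, 0.02, 4), 0, None)
        cand /= cand.sum()                       # keep weights normalised
        ce = misfit(cand)
        if ce <= e or rng.random() < np.exp((e - ce) / t):
            w, e = cand, ce
            if e < best_e:
                best_w, best_e = w.copy(), e
        t *= 0.9997

    print(np.round(best_w, 3))  # should approach [0.1, 0.4, 0.4, 0.1]
    ```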

  8. Adaptive Resolution Simulation of MARTINI Solvents

    NARCIS (Netherlands)

    Zavadlav, Julija; Melo, Manuel N.; Cunha, Ana V.; de Vries, Alex H.; Marrink, Siewert J.; Praprotnik, Matej

    We present adaptive resolution dynamics simulations of aqueous and apolar solvents using coarse-grained molecular models that are compatible with the MARTINI force field. As representatives of both classes of solvents we have chosen liquid water and butane, respectively, at ambient temperature. The solvent

  9. Intelligent simulated annealing algorithm applied to the optimization of the main magnet for magnetic resonance imaging machine

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu

    2001-08-01

    This work describes an alternative Simulated Annealing algorithm applied to the design of the main magnet for a Magnetic Resonance Imaging machine. The algorithm uses a probabilistic radial basis function neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required by simulated annealing to achieve the global maximum, when compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)

  10. Static Job Shop Scheduling Using the Simulated Annealing Method to Minimize Makespan

    Directory of Open Access Journals (Sweden)

    Moh.Husen

    2015-10-01

    Scheduling is a very important aspect for a company, because it is one of the elements of production planning and control that allows the company to deliver goods at the agreed time and to achieve a minimum total completion time. In this study, scheduling with the Simulated Annealing (SA) method, implemented with the help of Matlab, is expected to produce a total completion time (makespan) shorter than the company's existing schedule. The SA method simulates the annealing process in the manufacture of materials consisting of crystal grains or metal. The aim of this process is to produce a good crystal structure using as little energy as possible. The problem faced by the company is that it has not considered makespan in product completion and production scheduling for its single boarding-house package product. Production data show delays in completion time (makespan), so the company has to add another 2-5 days to finish all products. Using the SA method yields a makespan of 23 hours, 2 hours faster than the original schedule.
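
    To make the mechanics concrete, here is a minimal permutation-based annealer in the same spirit, assuming a synthetic permutation flow-shop instance as a simplified stand-in for the company's job shop; the swap neighbourhood and geometric cooling are illustrative choices, not the paper's exact settings.

    ```python
    import math
    import random

    random.seed(7)

    # Processing times [job][machine] for a small synthetic instance
    # (assumption: the paper's real shop data are not reproduced here).
    P = [[random.randint(1, 9) for _ in range(4)] for _ in range(6)]

    def makespan(order):
        """Permutation flow-shop makespan as a simplified job-shop stand-in."""
        m = len(P[0])
        c = [0] * m
        for job in order:
            for k in range(m):
                c[k] = max(c[k], c[k - 1] if k else 0) + P[job][k]
        return c[-1]

    order = list(range(len(P)))
    cur = makespan(order)
    best, best_cost = order[:], cur
    t = 10.0
    while t > 0.01:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # swap two jobs
        cand = makespan(order)
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best_cost:
                best, best_cost = order[:], cur
        else:
            order[i], order[j] = order[j], order[i]  # undo rejected swap
        t *= 0.995

    print(best, best_cost)
    ```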

  12. Optimization of permanent-magnet undulator magnets ordering using simulated annealing algorithm

    International Nuclear Information System (INIS)

    Chen Nian; He Duohui; Li Ge; Jia Qika; Zhang Pengfei; Xu Hongliang; Cai Genwang

    2005-01-01

    A pure permanent-magnet undulator consists of many magnets. The unavoidable remanence divergence of these magnets causes undulator magnetic field errors, which affect the functional mode of the storage ring and the quality of the spontaneous emission spectrum. By optimizing the ordering of the permanent-magnet undulator magnets with the simulated annealing algorithm before they are installed, the first field integral can be reduced to 10^-6 T·m, the second integral to 10^-6 T·m^2 and the peak field error to less than 10^-4. The optimized results are independent of the initial solution. This paper gives the optimizing process in detail and puts forward a method to quickly calculate the peak field error and field integrals according to the magnet remanence. (authors)

  13. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.

  14. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    Science.gov (United States)

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization, and it is difficult for a portfolio constructed in a past period to keep its performance in a future period. In order to keep the good performance of portfolios, we propose the extended information ratio as an objective function, using the information ratio, beta, prime beta, or correlation coefficient, in this paper. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, we generate a neighbor by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios keep good performance even when the market trend of the future period becomes different from that of the past period.
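
    The constrained-neighbourhood idea can be sketched as follows. The returns are synthetic and the plain information ratio (mean excess return over its standard deviation) stands in for the paper's extended ratio; the neighbour move transfers weight mass between two assets so the weights stay non-negative and sum to one.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic daily excess returns over a benchmark for 5 assets
    # (assumption: stands in for the paper's historical market data).
    R = rng.normal(0.0004, 0.01, size=(250, 5))

    def information_ratio(w):
        port = R @ w                       # portfolio excess return series
        return port.mean() / (port.std() + 1e-12)

    def neighbour(w):
        """Move a random slice of weight between two assets, keeping the
        weights non-negative and summing to one (constrained neighbourhood)."""
        w = w.copy()
        i, j = rng.choice(len(w), size=2, replace=False)
        shift = rng.uniform(0, w[i])
        w[i] -= shift
        w[j] += shift
        return w

    w = np.full(5, 0.2)
    score = information_ratio(w)
    best_w, best_s = w, score
    t = 0.1
    for _ in range(5000):
        cand = neighbour(w)
        s = information_ratio(cand)
        if s >= score or rng.random() < np.exp((s - score) / t):
            w, score = cand, s
            if score > best_s:
                best_w, best_s = w, score
        t *= 0.999

    print(np.round(best_w, 3), round(best_s, 4))
    ```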

  15. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    International Nuclear Information System (INIS)

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the measured scattered field derived from transverse magnetic illuminations, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global search and local search for finding optimal results in reasonable time, and simulated annealing (SA) uses a certain probability to avoid being trapped in a local optimum. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of some conducting cylinders, and good agreement with the original shapes is observed.

  16. An improved hybrid topology optimization approach coupling simulated annealing and SIMP (SA-SIMP)

    International Nuclear Information System (INIS)

    Garcia-Lopez, N P; Sanchez-Silva, M; Medaglia, A L; Chateauneuf, A

    2010-01-01

    The Solid Isotropic Material with Penalization (SIMP) methodology has been used extensively due to its versatility and ease of implementation. However, one of its main drawbacks is that the resulting topologies exhibit areas of intermediate densities which lack any physical meaning. This paper presents a hybrid methodology which couples simulated annealing and SIMP (SA-SIMP) in order to achieve solutions which are stiffer and predominantly black and white. Under a look-ahead strategy, the algorithm gradually fixes or removes those elements whose density resulting from SIMP is intermediate. Different strategies for selecting and fixing the fractional elements are examined using benchmark examples, which show that topologies resulting from SA-SIMP are stiffer than those from SIMP and predominantly black and white.

  17. Simulated annealing with restart strategy for the blood pickup routing problem

    Science.gov (United States)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with a restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
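
    The restart ingredient can be isolated in a few lines. The sketch below is a generic reading of an SA-with-restart scheme: the abstract does not spell out the exact restart policy, so the rule of reheating from the best solution after a stall, and all parameters, are assumptions; the demo objective is an arbitrary bumpy function rather than a BPRP instance, and Python is used here for brevity although the paper's implementation is in C++.

    ```python
    import math
    import random

    def sa_with_restart(cost, neighbour, x0, t0=1.0, alpha=0.99,
                        patience=200, iters=5000, seed=0):
        """Plain SA plus a restart rule: if the best solution does not
        improve for `patience` moves, jump back to the best solution
        found so far and reheat to the initial temperature."""
        random.seed(seed)
        x, t = x0, t0
        best, best_c = x0, cost(x0)
        cur_c, stall = best_c, 0
        for _ in range(iters):
            cand = neighbour(x)
            c = cost(cand)
            if c <= cur_c or random.random() < math.exp((cur_c - c) / t):
                x, cur_c = cand, c
            if cur_c < best_c:
                best, best_c, stall = x, cur_c, 0
            else:
                stall += 1
            if stall >= patience:        # restart: reheat from the best
                x, cur_c, t, stall = best, best_c, t0, 0
            t *= alpha
        return best, best_c

    # Demo on a bumpy 1-D function (illustrative only).
    f = lambda v: v * v + 3.0 * math.sin(5.0 * v)
    nb = lambda v: v + random.uniform(-0.5, 0.5)
    print(sa_with_restart(f, nb, x0=4.0))
    ```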

  18. Compatible topologies and parameters for NMR structure determination of carbohydrates by simulated annealing.

    Science.gov (United States)

    Feng, Yingang

    2017-01-01

    The use of NMR methods to determine the three-dimensional structures of carbohydrates and glycoproteins is still challenging, in part because of the lack of standard protocols. In order to increase the convenience of structure determination, the topology and parameter files for carbohydrates in the program Crystallography & NMR System (CNS) were investigated and new files were developed to be compatible with the standard simulated annealing protocols for proteins and nucleic acids. Recalculating the published structures of protein-carbohydrate complexes and glycosylated proteins demonstrates that the results are comparable to the published structures which employed more complex procedures for structure calculation. Integrating the new carbohydrate parameters into the standard structure calculation protocol will facilitate three-dimensional structural study of carbohydrates and glycosylated proteins by NMR spectroscopy.

  19. A Simulated Annealing method to solve a generalized maximal covering location problem

    Directory of Open Access Journals (Sweden)

    M. Saeed Jabalameli

    2011-04-01

    The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In its classical form, MCLP has three main implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of model formulations have been developed: gradual cover models, cooperative cover models, and variable radius models. In this paper, we develop a special form of MCLP which combines the characteristics of gradual cover models, cooperative cover models, and variable radius models. The proposed problem has many applications, such as locating cell phone towers. The model is formulated as a mixed integer non-linear program (MINLP). In addition, a simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated on a set of randomly generated problems.
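
    A compact sketch of annealing on a coverage-maximization problem follows. The instance is random, the facility count and radii are invented, and the linear decay between a full-coverage radius and a zero-coverage radius is just one simple reading of gradual cover; the paper's MINLP with cooperative and variable-radius features is considerably richer.

    ```python
    import math
    import random

    random.seed(8)

    # Random weighted demand points and candidate facility sites (assumed).
    demand = [(random.random(), random.random(), random.randint(1, 5))
              for _ in range(60)]
    sites = [(random.random(), random.random()) for _ in range(15)]
    p, r_full, r_zero = 4, 0.15, 0.35

    def coverage(chosen):
        total = 0.0
        for x, y, w in demand:
            d = min(math.dist((x, y), sites[s]) for s in chosen)
            if d <= r_full:
                total += w
            elif d < r_zero:              # partial (gradual) coverage
                total += w * (r_zero - d) / (r_zero - r_full)
        return total

    chosen = random.sample(range(len(sites)), p)
    cur = coverage(chosen)
    best, best_cov = chosen[:], cur
    t = 1.0
    for _ in range(2000):
        cand = chosen[:]
        out = random.randrange(p)         # swap one open site for a closed one
        cand[out] = random.choice(
            [s for s in range(len(sites)) if s not in set(cand)])
        c = coverage(cand)
        if c >= cur or random.random() < math.exp((c - cur) / t):
            chosen, cur = cand, c
            if cur > best_cov:
                best, best_cov = chosen[:], cur
        t *= 0.998

    print(sorted(best), round(best_cov, 2))
    ```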

  20. Improved Genetic and Simulated Annealing Algorithms to Solve the Traveling Salesman Problem Using Constraint Programming

    Directory of Open Access Journals (Sweden)

    M. Abdul-Niby

    2016-04-01

    The Traveling Salesman Problem (TSP) is an integer programming problem that falls into the category of NP-hard problems. As the problem becomes larger, there is no guarantee that optimal tours will be found within reasonable computation time. Heuristic techniques, like genetic algorithms and simulated annealing, can solve TSP instances with different levels of accuracy. Choosing which algorithm to use in order to get the best solution is still considered a hard choice. This paper suggests domain reduction as a tool to be combined with any meta-heuristic, so that the obtained results will be almost the same regardless of the meta-heuristic chosen. The hybrid approach of combining domain reduction with a meta-heuristic addresses the challenge of choosing an algorithm that matches the TSP instance in order to get the best results.
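
    For reference, the plain SA half of such a hybrid looks like the sketch below: a standard 2-opt annealer on a random Euclidean instance. The domain-reduction step (pruning the search space via constraint programming) is deliberately omitted, since the abstract does not specify its details.

    ```python
    import math
    import random

    random.seed(5)

    # Random Euclidean TSP instance (illustrative only).
    pts = [(random.random(), random.random()) for _ in range(30)]

    def tour_length(tour):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    tour = list(range(len(pts)))
    cur = tour_length(tour)
    best, best_len = tour[:], cur
    t = 1.0
    while t > 1e-4:
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        c = tour_length(cand)
        if c <= cur or random.random() < math.exp((cur - c) / t):
            tour, cur = cand, c
            if cur < best_len:
                best, best_len = tour[:], cur
        t *= 0.999

    print(round(best_len, 3))
    ```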

  1. An efficient simulated annealing algorithm for the redundancy allocation problem with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Chambari, Amirhossain; Najafi, Amir Abbas; Rahmati, Seyed Habib A.; Karimi, Aida

    2013-01-01

    The redundancy allocation problem (RAP) is an important reliability optimization problem. This paper studies a specific RAP in which the redundancy strategies are chosen. To do so, the choice of the redundancy strategy between active and cold standby is treated as a decision variable. The goal is to select the redundancy strategy, component, and redundancy level for each subsystem such that the system reliability is maximized. Since RAP is an NP-hard problem, we propose an efficient simulated annealing algorithm (SA) to solve it. In addition, to evaluate the performance of the proposed algorithm, it is compared with well-known algorithms in the literature on different test problems. The results of the performance analysis show the relatively satisfactory efficiency of the proposed SA algorithm.

  2. REPAIR SHOP JOB SCHEDULING WITH PARALLEL OPERATORS AND MULTIPLE CONSTRAINTS USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    N. Shivasankaran

    2013-04-01

    Scheduling problems are generally treated as NP-complete combinatorial optimization problems, which are multi-objective and multi-constraint. Repair shop job sequencing and operator allocation is one such NP-complete problem. For such problems, an efficient technique is required that explores a wide range of the solution space. This paper deals with the simulated annealing technique, a meta-heuristic, to solve the complex car sequencing and operator allocation problem in a car repair shop. The algorithm is tested with several constraint settings, and the solution quality exceeds the results reported in the literature, with high convergence speed and accuracy. This algorithm could be considered quite effective where other heuristic routines fail.

  3. Two-Dimensional IIR Filter Design Using Simulated Annealing Based Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Supriya Dhabal

    2014-01-01

    We present a novel hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) for the design of two-dimensional recursive digital filters. The proposed method, known as SA-PSO, integrates the global search ability of PSO with the local search ability of SA, and each offsets the weaknesses of the other. The Metropolis acceptance criterion is included in the basic PSO algorithm to increase the swarm's diversity by sometimes accepting weaker solutions as well. The experimental results reveal that the performance of the optimal filter designed by the proposed SA-PSO method is improved. Further, the convergence behavior and optimization accuracy of the proposed method have been improved significantly, and the computational time is also reduced. In addition, the proposed SA-PSO method produces the best optimal solution with lower mean and variance, which indicates that the algorithm can be used more efficiently in realizing two-dimensional digital filters.
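
    The essential coupling, a PSO velocity update plus a Metropolis test on each particle's personal best, can be sketched as below; the Rosenbrock objective, the coefficients and the cooling rate are illustrative assumptions, not the filter-design objective or settings used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def rosenbrock(x):
        return float(((1 - x[:-1]) ** 2
                      + 100 * (x[1:] - x[:-1] ** 2) ** 2).sum())

    dim, n_particles = 4, 20
    pos = rng.uniform(-2, 2, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([rosenbrock(p) for p in pos])
    k0 = pbest_val.argmin()
    gbest, gbest_val = pbest[k0].copy(), pbest_val[k0]

    t = 1.0
    for _ in range(300):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        vel = np.clip(vel, -1.0, 1.0)    # keep the swarm numerically stable
        pos = pos + vel
        for k in range(n_particles):
            v = rosenbrock(pos[k])
            # Metropolis criterion (the SA ingredient): sometimes adopt a
            # worse point as the particle's guide to preserve diversity.
            if (v < pbest_val[k]
                    or rng.random() < np.exp((pbest_val[k] - v) / t)):
                pbest[k], pbest_val[k] = pos[k].copy(), v
            if v < gbest_val:            # global best is never degraded
                gbest, gbest_val = pos[k].copy(), v
        t *= 0.98

    print(round(gbest_val, 4))
    ```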

  4. Adaption of core simulations to detector readings

    International Nuclear Information System (INIS)

    Lindahl, S.Oe.

    1985-05-01

    The shortcomings of the conventional core supervision methods are briefly discussed, and a new strategy for core surveillance is proposed. The strategy is based on a combination of analytical evaluation of the detailed core power and adaption of these results to detector measurements. The adaption is carried out 1) each time the simulator is executed, by use of averaged detector readings, and 2) approximately once a year, when the coefficients of the simulator's equations are reviewed. In the yearly review, calculations are tuned to measurements (TIP, γ-scanning, k-eff) by parameter optimization or by inversion of the diffusion equation. The proposed strategy is believed to increase the accuracy of the core surveillance, to yield improved thermal margins, to increase the accuracy of core predictions and design calculations, and to lessen the dependence of core surveillance on the detector equipment. (author)

  5. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    Science.gov (United States)

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is no rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent, and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3
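
    To make the tuning burden concrete, the sketch below shows the kind of manual schedule selection that DSSA is designed to eliminate: a classical geometric cooling law whose starting temperature is chosen so that a target fraction of uphill moves is initially accepted. This is a standard heuristic, not the DSSA schedule, whose details the abstract does not give.

    ```python
    import math
    import random

    def initial_temperature(uphill_deltas, target_accept=0.8):
        """Pick T0 so the average uphill move is accepted with probability
        `target_accept` under the Metropolis rule exp(-delta / T0)."""
        mean_delta = sum(uphill_deltas) / len(uphill_deltas)
        return -mean_delta / math.log(target_accept)

    random.seed(9)
    f = lambda x: x * x                  # placeholder design cost (assumed)

    # Sample uphill cost differences from random moves around a start point.
    deltas = []
    x = 5.0
    for _ in range(100):
        d = f(x + random.uniform(-1, 1)) - f(x)
        if d > 0:
            deltas.append(d)

    t0 = initial_temperature(deltas)
    print("T0 =", round(t0, 2))
    # Geometric cooling: temperature after k steps is t0 * alpha**k.
    alpha = 0.95
    print([round(t0 * alpha ** k, 2) for k in range(0, 50, 10)])
    ```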

  6. An evolutionary programming based simulated annealing method for solving the unit commitment problem

    Energy Technology Data Exchange (ETDEWEB)

    Christober Asir Rajan, C. [Department of EEE, Pondicherry Engineering College, Pondicherry 605014 (India); Mohan, M.R. [Department of EEE, Anna University, Chennai 600 025 (India)

    2007-09-15

    This paper presents a new approach to solving the short-term unit commitment problem using an evolutionary programming based simulated annealing method. The objective is to find the generation scheduling such that the total operating cost is minimized, subject to a variety of constraints. This also means finding the optimal generating unit commitment in the power system for the next H hours. Evolutionary programming, which is a global optimisation technique for solving the unit commitment problem, operates on a system designed to encode each unit's operating schedule with regard to its minimum up/down time. In this approach, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ('flat start'). The parents are obtained from a pre-defined set of solutions, i.e. each and every solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and SA improves the solutions. The best population is selected by an evolutionary strategy. The Neyveli Thermal Power Station (NTPS) Unit-II in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different power systems consisting of 10, 26 and 34 generating units. Numerical results compare the cost solutions and computation time obtained by using the evolutionary programming method and other conventional methods like dynamic programming, Lagrangian relaxation, simulated annealing and tabu search in reaching proper unit commitment. (author)

  7. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    Science.gov (United States)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that

  8. Adaptive resolution simulation of salt solutions

    International Nuclear Information System (INIS)

    Bevc, Staš; Praprotnik, Matej; Junghans, Christoph; Kremer, Kurt

    2013-01-01

    We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples the atomistic and coarse-grained models of the aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ the standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with the AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective interactions for water–water and water–ion interactions in the coarse-grained model are derived using a structure-based coarse-graining approach, while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural properties (e.g. radial distribution functions and density distributions of all the species) and dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used for any classical non-polarizable force field and/or types of ions, will significantly speed up biomolecular simulations involving aqueous salt. (paper)

  9. Elemental thin film depth profiles by ion beam analysis using simulated annealing - a new tool

    International Nuclear Information System (INIS)

    Jeynes, C; Barradas, N P; Marriott, P K; Boudreault, G; Jenkin, M; Wendler, E; Webb, R P

    2003-01-01

    Rutherford backscattering spectrometry (RBS) and related techniques have long been used to determine the elemental depth profiles in films a few nanometres to a few microns thick. However, although obtaining spectra is very easy, solving the inverse problem of extracting the depth profiles from the spectra is not possible analytically except for special cases. It is because these special cases include important classes of samples, and because skilled analysts are adept at extracting useful qualitative information from the data, that ion beam analysis is still an important technique. We have recently solved this inverse problem using the simulated annealing algorithm. We have implemented the solution in the 'IBA DataFurnace' code, which has been developed into a very versatile and general new software tool that analysts can now use to rapidly extract quantitative accurate depth profiles from real samples on an industrial scale. We review the features, applicability and validation of this new code together with other approaches to handling IBA (ion beam analysis) data, with particular attention being given to determining both the absolute accuracy of the depth profiles and statistically accurate error estimates. We include examples of analyses using RBS, non-Rutherford elastic scattering, elastic recoil detection and non-resonant nuclear reactions. High depth resolution and the use of multiple techniques simultaneously are both discussed. There is usually systematic ambiguity in IBA data and Butler's example of ambiguity (1990 Nucl. Instrum. Methods B 45 160-5) is reanalysed. Analyses are shown: of evaporated, sputtered, oxidized, ion implanted, ion beam mixed and annealed materials; of semiconductors, optical and magnetic multilayers, superconductors, tribological films and metals; and of oxides on Si, mixed metal silicides, boron nitride, GaN, SiC, mixed metal oxides, YBCO and polymers. (topical review)

  10. Reconstruction of X-rays spectra of clinical linear accelerators using the generalized simulated annealing method

    International Nuclear Information System (INIS)

    Manrique, John Peter O.; Costa, Alessandro M.

    2016-01-01

    The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to the patient undergoing radiation therapy, treatment planning systems (TPS) are used, which make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform three-dimensional dose calculations, thus ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we have obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. For the validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam, using Monte Carlo simulation with the PENELOPE code, and from the PDD then calculated the beam quality index TPR_20/10. (author)

  11. A study of inverse planning by simulated annealing for photon beams modulated by a multileaf collimator

    International Nuclear Information System (INIS)

    Grant, Walter; Carol, Mark; Geis, Paul; Boyer, Arthur L.

    1995-01-01

    Purpose/Objective: To demonstrate the feasibility of inverse planning for multiple fixed-field conformal therapy with a prototype simulated annealing technique and to deliver the treatment plan with an engineering prototype dynamic multileaf collimator. Methods and Materials: A version of the NOMOS inverse-planning algorithm was used to compute weighting distributions over the areas of multiple fixed-gantry fields. The algorithm uses simulated annealing and a cost function based on physical dose. The algorithm is a modification of a NOMOS Peacock planning implementation being used clinically. The computed weighting distributions represented the relative intensities over small 0.5 cm x 1.0 cm areas of the fields. The inverse planning was carried out on a Sun Model 20 computer with four processors. Between five and nine fixed-gantry beams were used in the plans. The weighting distributions were rendered into leaf-setting sequences using an algorithm developed for use with a Varian experimental dynamic multileaf collimator. The sequences were saved as computer files in a format that was used to drive the Varian control system. X-ray fields having 6-MV and 18-MV energies were planned and delivered using tumor target and sensitive structure volumes segmented from clinical CT scans. Results: The resulting beam-modulation sequences could be loaded into the accelerator control systems and initiated. Each fixed-gantry-angle beam was delivered in 30 s to 50 s. The resulting dose distributions were measured in quasi-anatomical phantoms using film. Dose distributions that could achieve significant tissue-sparing were demonstrated. There was good agreement between the delivered dose distributions and the planned distributions. Conclusion: The prototype inverse-planning system under development by NOMOS can be integrated with the prototype dynamic-delivery system being developed by Varian Associates. Should these commercial entities choose to offer compatible FDA

  12. Improvement of the matrix effect compensation in active neutron measurement by simulated annealing algorithm (June 2009)

    International Nuclear Information System (INIS)

    Raoux, A. C.; Loridon, J.; Mariani, A.; Passard, C.

    2009-01-01

    Active neutron measurements such as the Differential Die-Away (DDA) technique, involving a pulsed neutron generator, are widely applied to determine the fissile content of waste packages. Unfortunately, the main drawback of such techniques comes from the lack of knowledge of the waste matrix composition. Thus, matrix effect correction for the DDA measurement is an essential improvement in the field of fissile material content determination. Different solutions have been developed to compensate for the effect of the matrix on the interpretation of the neutron measurement. In this context, this paper describes an innovative matrix correction method we have developed with the goal of increasing the accuracy of the matrix effect correction and reducing the measurement time. The implementation of this method is based on the analysis of the raw signal with an optimisation algorithm called the simulated annealing algorithm. This algorithm needs a reference database of Multi-Channel Scaling (MCS) spectra to fit the raw signal. The construction of the MCS library involves a learning phase to define and acquire the DDA signals. This database has been provided by a set of active signals from experimental matrices (mock-up waste drums of 118 litres) recorded in a specific device dedicated to neutron measurement research and development of the Nuclear Measurement Laboratory of CEA-Cadarache, called PROMETHEE 6. The simulated annealing algorithm is applied to make use of the effect of the matrices on the total active signal of the DDA measurement. Furthermore, as this algorithm is applied directly to the raw active signal, it is very useful when active background contributions cannot be easily estimated and removed. Most of the cases tested during this work, which represents the feasibility phase of the method, are within a 4% agreement interval with the expected experimental value. Moreover, one can notice that without any compensation of the matrix effect, the classical DDA prompt

  14. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
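
    The MMSD criterion alone is easy to prototype. In the hedged sketch below, a fine evaluation grid on the unit square stands in for the field, a random 12-point design is jittered one point at a time, and annealing minimizes the mean distance from grid points to their nearest sample; the EMI-derived weighting and kriging-variance criteria are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Fine grid used to evaluate MMSD over a unit-square "field" (assumed).
    gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    def mmsd(samples):
        """Mean of shortest distances from every grid point to the design."""
        d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
        return float(d.min(axis=1).mean())

    samples = rng.random((12, 2))        # initial 12-point scheme
    e, t = mmsd(samples), 0.05
    best, best_e = samples.copy(), e
    for _ in range(3000):
        cand = samples.copy()
        k = rng.integers(len(cand))
        cand[k] = np.clip(cand[k] + rng.normal(0, 0.05, 2), 0, 1)  # jitter
        ce = mmsd(cand)
        if ce <= e or rng.random() < np.exp((e - ce) / t):
            samples, e = cand, ce
            if e < best_e:
                best, best_e = samples.copy(), e
        t *= 0.999

    print(round(best_e, 4))
    ```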

  15. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    An ad-hoc application of the simulated annealing technique is proposed, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations

  16. Kinetic Monte Carlo simulation of nanostructural evolution under post-irradiation annealing in dilute FeMnNi

    Energy Technology Data Exchange (ETDEWEB)

    Chiapetto, M. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium); Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Becquart, C.S. [Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Domain, C. [EDF R and D, Departement Materiaux et Mecanique des Composants, Les Renardieres, Moret sur Loing (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium)

    2015-01-01

    Post-irradiation annealing experiments are often used to obtain clearer information on the nature of defects produced by irradiation. However, their interpretation is not always straightforward without the support of physical models. We apply here a physically-based set of parameters for object kinetic Monte Carlo (OKMC) simulations of the nanostructural evolution of FeMnNi alloys under irradiation to the simulation of their post-irradiation isochronal annealing, from 290 to 600 °C. The model adopts a 'grey alloy' scheme, i.e. the solute atoms are not introduced explicitly, only their effect on the properties of point-defect clusters is. Namely, it is assumed that both vacancy and SIA clusters are significantly slowed down by the solutes. The slowing down increases with size until the clusters become immobile. Specifically, the slowing down of SIA clusters by Mn and Ni can be justified in terms of the interaction between these atoms and crowdions in Fe. The results of the model compare quantitatively well with post-irradiation isochronal annealing experimental data, providing clear insight into the mechanisms that determine the disappearance or re-arrangement of defects as functions of annealing time and temperature. (orig.)

  17. Simulated annealing (SA) to vehicle routing problems with soft time windows

    Directory of Open Access Journals (Sweden)

    Suphan Sodsoon

    2014-12-01

    The researchers applied and developed a meta-heuristic method to solve Vehicle Routing Problems with Soft Time Windows (VRPSTW). The case considered has a single depot and multiple, generally sparse customers, each with a known demand and a specific time window in which to be served. This problem is a representative combinatorial optimization problem in operations research and is known to be NP-hard. The algorithm uses Simulated Annealing (SA) to determine optimum solutions with fast solving times. After developing the algorithm, it is applied to examine the factors and the optimum extended time windows, testing these factors on Solomon's vehicle routing problems with time windows from the OR-Library, for the case of at most 25 customers. Six problems are included: C101, C102, R101, R102, RC101 and RC102. The results show the optimum extension of the time windows to be at the 50% level. Finally, comparing these solutions with the cases of vehicle routing with hard time windows and with flexible time windows shows percentage errors in the number of vehicles of approximately -28.57% and percentage errors in distances of approximately -28.57%, with the algorithm spending an average processing time of 45.5 s per problem.

  18. Multi-Objective Optimization for Pure Permanent-Magnet Undulator Magnets Ordering Using Modified Simulated Annealing

    CERN Document Server

    Chen Nian; Li, Ge

    2004-01-01

    Undulator field errors influence the electron beam trajectories and lower the radiation quality. The angular deflection of the electron beam is determined by the first field integral, the orbital displacement of the electron beam is determined by the second field integral, and the radiation quality can be evaluated by the rms field error or the phase error. Appropriate ordering of the magnets can greatly reduce the errors. We apply a modified simulated annealing algorithm to this multi-objective optimization problem, taking the first field integral, the second field integral and the rms field error as objective functions. An undulator with small field errors can be designed by this method within a reasonable calculation time, even for the case of hundreds of magnets (first field integral reduced to 10^-6 T·m, second integral to 10^-6 T·m^2 and rms field error to 0.01%). Thus, the field correction after assembly of the undulator will be greatly simplified. This paper gives the optimizing process in detail and puts forward a new method to quickly calculate the rms field e...

  19. Simulated Annealing-Based Ant Colony Algorithm for Tugboat Scheduling Optimization

    Directory of Open Access Journals (Sweden)

    Qi Xu

    2012-01-01

    As the “first service station” for ships in the whole port logistics system, the tugboat operation system is one of the most important systems in port logistics. This paper formulates the tugboat scheduling problem as a multiprocessor task scheduling problem (MTSP) after analyzing the characteristics of tugboat operation. The model considers factors of multiple anchorage bases, different operation modes, and three stages of operations (berthing/shifting-berth/unberthing). The objective is to minimize the total operation times for all tugboats in a port. A hybrid simulated annealing-based ant colony algorithm is proposed to solve the addressed problem. Numerical experiments without the shifting-berth operation verified the effectiveness of the approach and showed that more efficient sailing may be possible if tugboats return to the anchorage base in a timely manner; the experiments with the shifting-berth operation show that the objective is most sensitive to the proportion of shifting-berth operations, is influenced slightly by the tugboat deployment scheme, and is not sensitive to the handling operation times.

  20. Study on the mechanism and efficiency of simulated annealing using an LP optimization benchmark problem - 113

    International Nuclear Information System (INIS)

    Qianqian, Li; Xiaofeng, Jiang; Shaohong, Zhang

    2010-01-01

    The Simulated Annealing Algorithm (SAA) for solving combinatorial optimization problems is a popular method for loading pattern optimization. The main purpose of this paper is to understand the underlying search mechanism of SAA and to study its efficiency. In this study, a general SAA that employs random pair exchange of fuel assemblies to search for the optimum fuel Loading Pattern (LP) is applied to an exhaustively searched LP optimization benchmark problem. All the possible LPs of the benchmark problem have been enumerated and evaluated via the very fast and accurate Hybrid Harmonics and Linear Perturbation (HHLP) method, such that the mechanism of SA for LP optimization can be explicitly analyzed and its search efficiency evaluated. The generic core geometry itself dictates that only a small number of LPs can be generated by performing random single pair exchanges and that these LPs are necessarily mostly similar to the initial LP. This phase space effect turns out to be the basic mechanism in SAA that explains its efficiency and good local search ability. A measure of search efficiency is introduced which shows that the stochastic nature of SAA greatly influences the variability of its search efficiency. It is also found that using the fuel assembly k-infinity distribution as a technique to filter the LPs can significantly enhance the SAA search efficiency. (authors)

  1. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can feasibly learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.

  2. An interactive system for creating object models from range data based on simulated annealing

    International Nuclear Information System (INIS)

    Hoff, W.A.; Hood, F.W.; King, R.H.

    1997-01-01

    In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. However, an interactive modeling system combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy

  3. Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Khanh Doan V.K.

    2014-06-01

    Full Text Available Thermo-electric Coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant or dynamic parts are needed, they generate no electrical or acoustical noise, and they are environmentally friendly. Over the past decades, much research has been devoted to improving the efficiency of TECs by enhancing the material parameters and the design parameters. The material parameters are restricted by currently available materials and module fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, some previous research is first reviewed to show the diversity of optimization approaches in the design of TECs for enhancing performance and efficiency. After that, single-objective optimization problems (SOP) are tested by using a Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize geometry properties so that TECs operate at near-optimal conditions. Equality and inequality constraints were taken into consideration.

  4. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    Science.gov (United States)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process used to search for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
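
    The division of labour described above — SA as the master acceptance loop, GA operators as the move generator — can be condensed to a short sketch. The route-set encoding (a 0/1 vector over candidate routes) and the toy cost function below are assumptions for illustration only; the real objective would be the user-plus-operator system cost after demand assignment.

        import math
        import random

        N_ROUTES = 40  # size of the candidate route set (illustrative)

        def system_cost(sel):
            # Toy stand-in for the total system cost (user + operator costs):
            # prefer about 12 routes, spread across the candidate list.
            chosen = [i for i, bit in enumerate(sel) if bit]
            if not chosen:
                return float("inf")
            return 10 * abs(len(chosen) - 12) + (N_ROUTES - 1 - (max(chosen) - min(chosen)))

        def ga_propose(parent_a, parent_b, p_mut=0.05):
            # GA sub-process: one-point crossover, then bit-flip mutation.
            cut = random.randrange(1, len(parent_a))
            child = parent_a[:cut] + parent_b[cut:]
            return [1 - b if random.random() < p_mut else b for b in child]

        def anneal(t0=50.0, alpha=0.95, iters=500):
            current = [random.randint(0, 1) for _ in range(N_ROUTES)]
            best = current[:]
            t = t0
            for _ in range(iters):
                cand = ga_propose(current, best)  # new solution from GA operators
                delta = system_cost(cand) - system_cost(current)
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    current = cand
                    if system_cost(current) < system_cost(best):
                        best = current[:]
                t *= alpha  # SA cooling on the master loop
            return best, system_cost(best)

        print(anneal())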

  5. Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra

    Science.gov (United States)

    Ryden, N.; Park, C.B.

    2006-01-01

    The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from the values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. © 2006 Society of Exploration Geophysicists.
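
    For readers unfamiliar with the "fast" variant: FSA is usually distinguished from classical SA by heavy-tailed (Cauchy) move generation combined with an inverse-linear cooling schedule T_k = T_0/(1 + k), rather than the much slower logarithmic schedule classical SA needs for its convergence guarantee. The sketch below illustrates this on a one-dimensional toy misfit standing in for the phase-velocity-spectrum residual; the objective and all constants are assumptions.

        import math
        import random

        def misfit(v):
            # Toy multimodal misfit standing in for the difference between
            # measured and modelled phase-velocity spectra.
            return (v - 2.0) ** 2 + math.sin(8.0 * v)

        def fast_anneal(v0=0.0, t0=1.0, iters=5000, lo=-5.0, hi=5.0):
            v, e = v0, misfit(v0)
            best_v, best_e = v, e
            for k in range(1, iters + 1):
                t = t0 / (1.0 + k)  # fast (inverse-linear) cooling
                # Cauchy-distributed step via inverse-CDF sampling.
                step = t * math.tan(math.pi * (random.random() - 0.5))
                cand = min(hi, max(lo, v + step))
                de = misfit(cand) - e
                if de <= 0 or random.random() < math.exp(-de / t):
                    v, e = cand, misfit(cand)
                    if e < best_e:
                        best_v, best_e = v, e
            return best_v, best_e

        print(fast_anneal())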

  6. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  7. A proposal simulated annealing algorithm for proportional parallel flow shops with separated setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2014-08-01

    Full Text Available This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies in the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six Simulated Annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. The study can be classified as “applied research” in nature, “exploratory” in its objectives and “experimental” in its procedures, with a “quantitative” approach. The proposed algorithms were effective regarding solution quality and computationally efficient. Results of Analysis of Variance (ANOVA) revealed no significant difference between the perturbation schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested for providing the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
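
    The PS4 scheme is described only as moving a subsequence of jobs; one plausible reading of such a perturbation operator is sketched below (the block length and reinsertion position are illustrative assumptions).

        import random

        def move_subsequence(seq, max_block=3):
            # Remove a random contiguous block of jobs and reinsert it
            # at a random position elsewhere in the sequence.
            seq = seq[:]
            n = len(seq)
            length = random.randint(2, min(max_block, n - 1))
            start = random.randrange(n - length + 1)
            block = seq[start:start + length]
            del seq[start:start + length]
            pos = random.randrange(len(seq) + 1)
            return seq[:pos] + block + seq[pos:]

        print(move_subsequence([1, 2, 3, 4, 5, 6, 7]))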

  8. Crosshole Tomography, Waveform Inversion, and Anisotropy: A Combined Approach Using Simulated Annealing

    Science.gov (United States)

    Afanasiev, M.; Pratt, R. G.; Kamei, R.; McDowell, G.

    2012-12-01

    Crosshole seismic tomography has been used by Vale to provide geophysical images of mineralized massive sulfides in the Eastern Deeps deposit at Voisey's Bay, Labrador, Canada. To date, these data have been processed using traveltime tomography, and we seek to improve the resolution of these images by applying acoustic Waveform Tomography. Due to the computational cost of acoustic waveform modelling, local descent algorithms are employed in Waveform Tomography; due to non-linearity, an initial model is required which predicts first-arrival traveltimes to within a half-cycle of the lowest frequency used. Because seismic velocity anisotropy can be significant in hardrock settings, the initial model must quantify the anisotropy in order to meet the half-cycle criterion. In our case study, significant velocity contrasts between the target massive sulfides and the surrounding country rock led to difficulties in generating an accurate anisotropy model through traveltime tomography, and our starting model for Waveform Tomography failed the half-cycle criterion at large offsets. We formulate a new, semi-global approach for finding the best-fit 1-D elliptical anisotropy model using simulated annealing. Through random perturbations to Thomsen's ε parameter, we explore the L2 norm of the frequency-domain phase residuals in the space of potential anisotropy models: if a perturbation decreases the residuals, it is always accepted, but if a perturbation increases the residuals, it is accepted with the probability P = exp(-(Ei - E)/T). This is the Metropolis criterion, where Ei is the value of the residuals at the current iteration, E is the value of the residuals for the previously accepted model, and T is a probability control parameter, which is decreased over the course of the simulation via a preselected cooling schedule. Convergence to the global minimum of the residuals is guaranteed only for infinitely slow cooling, but in practice good results are obtained from a variety
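
    The acceptance rule quoted in this abstract is the standard Metropolis criterion, and the one-parameter anisotropy search it describes fits in a few lines. In the sketch below the residual function is a placeholder for the L2 norm of the frequency-domain phase residuals; the initial value, step size and cooling schedule are illustrative assumptions.

        import math
        import random

        def residual_norm(epsilon):
            # Placeholder for the L2 norm of the frequency-domain phase
            # residuals of a 1-D elliptical anisotropy model.
            return (epsilon - 0.12) ** 2

        def anneal_epsilon(eps0=0.0, t0=1e-2, alpha=0.99, iters=2000, step=0.02):
            eps, e = eps0, residual_norm(eps0)
            t = t0
            for _ in range(iters):
                cand = eps + random.uniform(-step, step)  # random perturbation
                e_i = residual_norm(cand)
                # Metropolis criterion: accept with P = exp(-(E_i - E) / T).
                if e_i <= e or random.random() < math.exp(-(e_i - e) / t):
                    eps, e = cand, e_i
                t *= alpha  # preselected cooling schedule
            return eps, e

        print(anneal_epsilon())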

  9. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    Science.gov (United States)

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rössler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.

  10. Solving a multi-objective manufacturing cell scheduling problem with the consideration of warehouses using a simulated annealing based procedure

    Directory of Open Access Journals (Sweden)

    Adrián A. Toncovich

    2019-01-01

    Full Text Available The competition manufacturing companies face has driven the development of novel and efficient methods that enhance the decision-making process. In this work, a specific flow shop scheduling problem of practical interest in industry is presented and formalized using a mathematical programming model. The problem considers a manufacturing system arranged as a work cell that takes into account the transport operations of raw material and final products between the manufacturing cell and warehouses. For solving this problem, we present a multiobjective metaheuristic strategy based on simulated annealing, the Pareto Archived Simulated Annealing (PASA). We tested this strategy on two kinds of benchmark problem sets proposed by the authors. The first group is composed of small-sized problems. On these tests, PASA was able to obtain optimal or near-optimal solutions in significantly short computing times. To complete the analysis, we compared these results to the exact Pareto front of the instances obtained with the augmented ε-constraint method. Then, we also tested the algorithm on a set of larger problems to evaluate its performance in more extensive search spaces. We performed this assessment through an analysis of the hypervolume metric. Both sets of tests showed the competitiveness of the Pareto Archived Simulated Annealing in efficiently solving this problem and obtaining good-quality solutions while using reasonable computational resources.
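
    The archiving logic at the heart of Pareto Archived Simulated Annealing can be isolated in a few lines: a candidate enters the archive unless some member dominates it, and it evicts any members it dominates. The sketch below shows only this dominance bookkeeping for a two-objective minimization; the sample objective vectors are illustrative, and a full PASA would wrap this inside an annealing loop.

        def dominates(a, b):
            # True if objective vector a Pareto-dominates b (minimization).
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def update_archive(archive, candidate):
            # Candidate enters unless dominated; members it dominates leave.
            if any(dominates(f, candidate) for f in archive):
                return archive, False
            archive = [f for f in archive if not dominates(candidate, f)]
            return archive + [candidate], True

        archive = []
        for point in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
            archive, accepted = update_archive(archive, point)
        print(archive)  # the non-dominated front among the sampled points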

  11. WEAR PERFORMANCE OPTIMIZATION OF SILICON NITRIDE USING GENETIC AND SIMULATED ANNEALING ALGORITHM

    Directory of Open Access Journals (Sweden)

    SACHIN GHALME

    2017-12-01

    Full Text Available Replacing a damaged joint with a suitable alternative material is a prime requirement in patients who have arthritis. Generation of wear particles in the artificial joint during action or movement is a serious issue and leads to aseptic loosening of the joint. Research in the field of bio-tribology is trying to evaluate materials with minimum wear volume loss so as to extend joint life. Silicon nitride (Si3N4) is a non-oxide ceramic suggested as a new alternative for hip/knee joint replacement. Hexagonal boron nitride (hBN) is recommended as a solid additive lubricant to improve the wear performance of Si3N4. In this paper, an attempt has been made to evaluate the optimum combination of load and % volume of hBN in Si3N4 to minimize wear volume loss (WVL). The experiments were conducted according to the Design of Experiments (DoE) – Taguchi method, and a mathematical model was developed. Further, this model was processed with a Genetic Algorithm (GA) and Simulated Annealing (SA) to find the optimum percentage of hBN in Si3N4 to minimize wear volume loss against an alumina (Al2O3) counterface. The Taguchi method suggests a 15 N load and 8% volume of hBN to minimize WVL of Si3N4, while GA and SA optimization offer an 11.08 N load with 12.115% volume of hBN and an 11.0789 N load with 12.128% volume of hBN, respectively, to minimize WVL in Si3N4.

  12. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
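
    The objective sketched in this abstract — rearrange the clone columns so that the number of gaps on the object rows is minimized — has a compact implementation: for each row, count the contiguous runs of member clones under the current column order; every run beyond the first is a gap. The membership matrix below is a toy example; an annealer would swap columns (respecting the FISH ordering constraints) to drive this count down.

        def gap_count(membership, order):
            # membership: M x N 0/1 matrix (object rows x clone columns).
            # order: permutation of column indices (candidate clone ordering).
            gaps = 0
            for row in membership:
                runs, prev = 0, 0
                for col in order:
                    if row[col] and not prev:
                        runs += 1
                    prev = row[col]
                gaps += max(0, runs - 1)
            return gaps

        m = [[1, 1, 0, 1],   # object A intersects clones 0, 1, 3
             [0, 1, 1, 0]]   # object B intersects clones 1, 2
        print(gap_count(m, [0, 1, 2, 3]))  # 1: row A is split by clone 2
        print(gap_count(m, [2, 1, 0, 3]))  # 0: both rows form single runs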

  13. A restraint molecular dynamics and simulated annealing approach for protein homology modeling utilizing mean angles

    Directory of Open Access Journals (Sweden)

    Maurer Till

    2005-04-01

    Full Text Available Abstract Background We have developed the program PERMOL for semi-automated homology modeling of proteins. It is based on restrained molecular dynamics using a simulated annealing protocol in torsion angle space. As the main restraints defining the optimal local geometry of the structure, weighted mean dihedral angles and their standard deviations are used, calculated with an algorithm described earlier by Döker et al. (1999, BBRC, 257, 348–350). The overall long-range contacts are established via a small number of distance restraints between atoms involved in hydrogen bonds and backbone atoms of conserved residues. Employing the restraints generated by PERMOL, three-dimensional structures are obtained using standard molecular dynamics programs such as DYANA or CNS. Results To test this modeling approach it has been used for predicting the structure of the histidine-containing phosphocarrier protein HPr from E. coli and the structure of the human peroxisome proliferator activated receptor γ (PPARγ). The divergence between the modeled HPr and the previously determined X-ray structure was comparable to the divergence between the X-ray structure and the published NMR structure. The modeled structure of PPARγ was also very close to the previously solved X-ray structure, with an RMSD of 0.262 nm for the backbone atoms. Conclusion In summary, we present a new method for homology modeling capable of producing high-quality structure models. An advantage of the method is that it can be used in combination with incomplete NMR data to obtain reasonable structure models in accordance with the experimental data.

  14. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    Science.gov (United States)

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in site selection and capacity determination of DGs, such as slow convergence speed and a tendency to fall into local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the capabilities of the algorithm are well embodied in global searching and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in flow calculation by the backward or forward sweep method, and reactive power sharing principles and allocation theory are applied to determine the initial reactive power value and execute subsequent corrections, thus providing the algorithm a better start to speed up convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through applications in an IEEE 33-node distribution system, it is found that the proposed method can achieve desirable economic efficiency and a safer voltage level relative to traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.

  15. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations, which survive the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle, using the same hard dose and dose-volume constraints, and the largest achievable PTV doses obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of the sequential search used by Cycle (which could get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans than FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces.

  16. Optimization of a hydrometric network extension using specific flow, kriging and simulated annealing

    Science.gov (United States)

    Chebbi, Afef; Kebaili Bargaoui, Zoubeida; Abid, Nesrine; da Conceição Cunha, Maria

    2017-12-01

    In hydrometric stations, water levels are continuously observed and discharge rating curves are constantly updated to achieve accurate river level and discharge observations. An adequate spatial distribution of hydrological gauging stations is of great interest in connection with river regime characterization, water infrastructure design, water resources management and ecological surveys. Due to the increase of riverside populations and the associated flood risk, hydrological networks constantly need to be developed. This paper suggests taking advantage of kriging approaches to improve the design of a hydrometric network. The context deals with the application of an optimization approach using ordinary kriging and simulated annealing (SA) in order to identify the best locations to install new hydrometric gauges. The task at hand is to extend an existing hydrometric network in order to estimate, at ungauged sites, the average specific annual discharge, which is a key basin descriptor. This methodology is developed for the hydrometric network of the transboundary Medjerda River in the north of Tunisia. A Geographic Information System (GIS) is adopted to delineate basin limits and centroids, and the centroids are used to assign basin locations in the kriging computations. Scenarios where the size of an existing 12-station network is alternatively increased by 1, 2, 3, 4 and 5 new station(s) are investigated using geo-regression and minimization of the variance of the kriging errors. The analysis of the optimized locations from one scenario to another shows a perfect conformity with respect to the location of the new sites. The new locations ensure a better spatial coverage of the study area, as seen in the increase of both the average and the maximum inter-station distances after optimization. The optimization procedure selects the basins that ensure the shifting of the mean drainage area towards higher specific discharges.
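
    A stripped-down version of this kind of network-extension search is sketched below. To keep the example self-contained, the mean distance from each basin centroid to its nearest station is used as a proxy objective in place of the mean kriging error variance; the coordinates, cooling constants and station counts are all illustrative assumptions.

        import math
        import random

        def coverage_cost(stations, basins):
            # Proxy objective: mean distance from each basin centroid to its
            # nearest station (stand-in for the mean kriging error variance).
            return sum(min(math.dist(b, s) for s in stations) for b in basins) / len(basins)

        def pick_new_sites(existing, candidates, k, t0=1.0, alpha=0.97, iters=500):
            chosen = random.sample(candidates, k)
            cost = coverage_cost(existing + chosen, candidates)
            t = t0
            for _ in range(iters):
                cand = chosen[:]
                cand[random.randrange(k)] = random.choice(candidates)  # move one site
                c = coverage_cost(existing + cand, candidates)
                if c <= cost or random.random() < math.exp(-(c - cost) / t):
                    chosen, cost = cand, c
                t *= alpha
            return chosen, cost

        random.seed(1)
        existing = [(0.0, 0.0), (10.0, 10.0)]                      # current gauges
        candidates = [(random.uniform(0, 10), random.uniform(0, 10))
                      for _ in range(50)]                          # basin centroids
        print(pick_new_sites(existing, candidates, k=3))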

  17. Electrode Materials, Thermal Annealing Sequences, and Lateral/Vertical Phase Separation of Polymer Solar Cells from Multiscale Molecular Simulations

    KAUST Repository

    Lee, Cheng-Kuang

    2014-12-10

    © 2014 American Chemical Society. The nanomorphologies of the bulk heterojunction (BHJ) layer of polymer solar cells are extremely sensitive to the electrode materials and thermal annealing conditions. In this work, the correlations of electrode materials, thermal annealing sequences, and the resultant BHJ nanomorphological details of a P3HT:PCBM BHJ polymer solar cell are studied by a series of large-scale, coarse-grained (CG) molecular simulations of a system comprised of PEDOT:PSS/P3HT:PCBM/Al layers. Simulations are performed for various configurations of electrode materials as well as processing temperatures. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link between morphology and processing conditions. Our analysis indicates that vertical phase segregation of the P3HT:PCBM blend strongly depends on the electrode material and thermal annealing schedule. A thin P3HT-rich film is formed on top, regardless of the bottom electrode material, when the BHJ layer is exposed to the free surface during thermal annealing. In addition, preferential segregation of P3HT chains and PCBM molecules toward the PEDOT:PSS and Al electrodes, respectively, is observed. Detailed morphology analysis indicated that, surprisingly, vertical phase segregation does not affect the connectivity of donor/acceptor domains with the respective electrodes. However, the formation of P3HT/PCBM depletion zones next to the P3HT/PCBM-rich zones can be a potential bottleneck for electron/hole transport due to the increase in transport pathway length. Analysis in terms of the fraction of intra- and interchain charge transport revealed that the processing schedule affects the average vertical orientation of polymer chains, which may be crucial for enhanced charge transport, nongeminate recombination, and charge collection. The present study establishes a more detailed link between processing and morphology by combining multiscale molecular

  18. Numerical and experimental simulation of mechanical and microstructural transformations in Batch annealing steels

    International Nuclear Information System (INIS)

    Monsalve, A.; Artigas, A.; Celentano, D.; Melendez, F.

    2004-01-01

    The heating and cooling curves during the batch annealing process of low-carbon steel have been modeled using the finite element technique. This has made it possible to predict the transient thermal profile for every point of the annealed coils, particularly for the hottest and coldest ones. Through experimental measurements, the results have been adequately validated, since good agreement has been found between experimental values and those predicted by the model. Moreover, an Avrami recrystallization model has been coupled to this thermal balance computation. Interrupted annealing experiments have been made by measuring the recrystallized fraction at the extreme points of the coil for different times. These data made it possible to validate the developed recrystallization model through reasonably good numerical-experimental fits. (Author) 6 refs.

  19. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem

  20. The Durham Adaptive Optics Simulation Platform (DASP): Current status

    OpenAIRE

    Basden, Alastair; Bharmal, Nazim; Jenkins, David; Morris, Timothy; Osborn, James; Jia, Peng; Staykov, Lazar

    2018-01-01

    The Durham Adaptive Optics Simulation Platform (DASP) is a Monte-Carlo modelling tool used for the simulation of astronomical and solar adaptive optics systems. In recent years, this tool has been used to predict the expected performance of the forthcoming extremely large telescope adaptive optics systems, and has seen the addition of several modules with new features, including Fresnel optics propagation and extended object wavefront sensing. Here, we provide an overview of the features of D...

  1. PKA spectral effects on subcascade structures and free defect survival ratio as estimated by cascade-annealing computer simulation

    International Nuclear Information System (INIS)

    Muroga, Takeo

    1990-01-01

    The free defect survival ratio is calculated by "cascade-annealing" computer simulation using the MARLOWE and modified DAIQUIRI codes in various cases of Primary Knock-on Atom (PKA) spectra. The number of subcascades is calculated by "cut-off" calculation using MARLOWE. The adequacy of these methods is checked by comparing the results with experiments (surface segregation measurements and Transmission Electron Microscope cascade defect observations). The correlation using the weighted average recoil energy as a parameter shows that the saturation of the free defect survival ratio at high PKA energies has a close relation to the cascade splitting into subcascades. (author)

  2. Characterisation of amorphous silicon alloys by RBS/ERD with self consistent data analysis using simulated annealing

    International Nuclear Information System (INIS)

    Barradas, N.P.; Wendler, E.; Jeynes, C.; Summers, S.; Reehal, H.S.

    1999-01-01

    Full text: Hydrogenated amorphous silicon films are deposited by CVD onto insulating (silica) substrates for the fabrication of solar cells. 1.5 MeV 4He ERD/RBS is applied to the films, and a self-consistent depth profile of Si and H is obtained for each sample using the simulated annealing (SA) algorithm. The analytical procedure is described in detail, and the confidence limits of the profiles are obtained using the Markov Chain Monte Carlo method, which is a natural extension of the SA algorithm. We show how the results are of great benefit to the growers.

  3. A New Heuristic Providing an Effective Initial Solution for a Simulated Annealing approach to Energy Resource Scheduling in Smart Grids

    DEFF Research Database (Denmark)

    Sousa, Tiago M; Morais, Hugo; Castro, R.

    2014-01-01

    The energy resource scheduling problem is complex, and the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach...

  4. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    Science.gov (United States)

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.

  5. A market based active/reactive dispatch including transformer taps and reactor and capacitor banks using Simulated Annealing

    International Nuclear Information System (INIS)

    Gomes, Mario Helder; Saraiva, Joao Tome

    2009-01-01

    This paper describes an optimization model to be used by System Operators to validate the economic schedules obtained by Market Operators together with the injections from Bilateral Contracts. These studies are performed off-line on the day before operation; the developed model is based on adjustment bids submitted by generators and loads and is used by System Operators when necessary to enforce technical or security constraints. The model corresponds to an enhancement of an approach described in a previous paper and now includes discrete components such as transformer taps and reactor and capacitor banks. The resulting mixed-integer formulation is solved using Simulated Annealing, a well-known metaheuristic especially suited for combinatorial problems. Once the Simulated Annealing converges and the values of the discrete variables are fixed, the resulting non-linear continuous problem is solved using Sequential Linear Programming to get the final solution. The developed model corresponds to an AC version; it includes constraints related to the capability diagram of synchronous generators and variables allowing the computation of the active power required to balance active losses. Finally, the paper includes a Case Study based on the IEEE 118-bus system to illustrate the results that can be obtained and their interest. (author)

  6. Multi–criteria evaluation and simulated annealing for delimiting high priority habitats of Alectoris chukar and Phasianus colchicus in Iran

    Directory of Open Access Journals (Sweden)

    Momeni Dehaghi, I.

    2018-01-01

    Full Text Available Habitat degradation and hunting are among the most important causes of population decline for Alectoris chukar and Phasianus colchicus, two of the most threatened game species in the Golestan Province of Iran. Limited data on the distribution and location of high-quality habitats for the two species make conservation efforts more difficult in the province. We used multi-criteria evaluation (MCE) as a coarse-filter approach to refine the general distribution areas into habitat suitability maps for the species. We then used these maps as input to simulated annealing as a heuristic algorithm, through Marxan, in order to prioritize areas for conservation of the two species. To find the optimal solution, we tested various boundary length modifier (BLM) values in the simulated annealing process. Our results showed that the MCE approach was useful for refining general habitat maps. Assessment of the selected reserves confirmed the suitability of the selected areas (mainly neighboring the current reserves), making their management easier and more feasible. The total area of the selected reserves was about 476 km². As the current reserves of the Golestan Province represent only 23% of the optimal area, further protected areas should be considered to efficiently conserve these two species.

  7. Simulated Annealing-based Optimal Proportional-Integral-Derivative (PID) Controller Design: A Case Study on Nonlinear Quadcopter Dynamics

    Science.gov (United States)

    Nemirsky, Kristofer Kevin

    In this thesis, the history and evolution of rotor aircraft with simulated annealing-based PID application are reviewed, and quadcopter dynamics are presented. The dynamics of a quadcopter were modeled, analyzed, and linearized. A cascaded-loop architecture with PID controllers was used to stabilize the plant dynamics, which was improved upon through the application of simulated annealing (SA). A Simulink model was developed to test the controllers and verify the functionality of the proposed control system design. In addition, the data that the Simulink model provided were compared with flight data to establish the validity of the derived dynamics as a proper mathematical model representing the true dynamics of the quadcopter system. Then, the SA-based global optimization procedure was applied to obtain optimized PID parameters. It was observed that the gains tuned by the SA algorithm produced a better-performing PID controller than the original manually tuned one. Next, we investigated the uncertain dynamics of the quadcopter setup. After adding uncertainty to the gyroscopic effects associated with pitch-and-roll rate dynamics, the controllers were shown to be robust against the added uncertainty. A discussion follows to summarize SA-based PID controller design and performance outcomes. Lastly, future work on SA application to multi-input-multi-output (MIMO) systems is briefly discussed.
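
    The tuning loop described here — SA searching PID gains against a simulated closed-loop response — can be condensed to a short sketch. The plant below is a toy damped double integrator (a crude stand-in for one linearized quadcopter axis), the cost is the integral of absolute error for a unit step, and all constants are illustrative assumptions rather than values from the thesis.

        import math
        import random

        def closed_loop_cost(kp, ki, kd, dt=0.01, t_end=5.0):
            # Euler simulation of a toy plant x'' = u - 0.5 x' under PID control.
            x = v = integ = 0.0
            prev_err = 1.0
            cost = 0.0
            for _ in range(int(t_end / dt)):
                err = 1.0 - x                      # unit step reference
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                prev_err = err
                v += (u - 0.5 * v) * dt
                x += v * dt
                cost += abs(err) * dt              # integral of absolute error
            return cost

        def tune_pid(t0=1.0, alpha=0.97, iters=300):
            gains = [1.0, 0.1, 0.1]                # initial (kp, ki, kd)
            cost = closed_loop_cost(*gains)
            t = t0
            for _ in range(iters):
                cand = [max(0.0, g + random.gauss(0, 0.2)) for g in gains]
                c = closed_loop_cost(*cand)
                if c <= cost or random.random() < math.exp(-(c - cost) / t):
                    gains, cost = cand, c
                t *= alpha
            return gains, cost

        random.seed(2)
        print(tune_pid())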

  8. A dynamic programming–enhanced simulated annealing algorithm for solving bi-objective cell formation problem with duplicate machines

    Directory of Open Access Journals (Sweden)

    Mohammad Mohammadi

    2015-04-01

    Full Text Available The cell formation process is one of the first and most important steps in designing cellular manufacturing systems. It consists of identifying part families according to similarities in the design, shape, and processing of parts, and dedicating machines to each part family based on the operations required by the parts. In this study, a hybrid method based on a combination of a simulated annealing algorithm and dynamic programming was developed to solve a bi-objective cell formation problem with duplicate machines. In the proposed hybrid method, each solution is represented as a permutation of parts, created by the simulated annealing algorithm, and dynamic programming is used to partition this permutation into part families and determine the number of machines in each cell such that the total dissimilarity between the parts and the total machine investment cost are minimized. The performance of the algorithm was evaluated through numerical experiments of different sizes. Our computational experiments indicated that the results were very encouraging in terms of computational time and solution quality.
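
    The dynamic-programming half of the hybrid can be isolated: given a permutation of parts produced by the SA side, find the minimum-cost split of that permutation into contiguous cells. The sketch below does exactly that for an arbitrary per-cell cost function; the toy cost (a fixed machine-investment cost per cell plus the spread of part indices inside it) is an illustrative assumption.

        def best_partition(perm, cell_cost, max_cells=4):
            # best[k][i]: minimum cost of splitting perm[:i] into k cells,
            # where each cell is a contiguous block of the permutation.
            n = len(perm)
            INF = float("inf")
            best = [[INF] * (n + 1) for _ in range(max_cells + 1)]
            best[0][0] = 0.0
            choice = {}
            for k in range(1, max_cells + 1):
                for i in range(1, n + 1):
                    for j in range(k - 1, i):
                        c = best[k - 1][j] + cell_cost(perm[j:i])
                        if c < best[k][i]:
                            best[k][i], choice[(k, i)] = c, j
            k = min(range(1, max_cells + 1), key=lambda kk: best[kk][n])
            cells, i = [], n
            while k:  # backtrack through the recorded split points
                j = choice[(k, i)]
                cells.append(perm[j:i])
                i, k = j, k - 1
            return list(reversed(cells))

        def toy_cost(segment):
            # Fixed machine-investment cost per cell plus part dissimilarity,
            # approximated here by the spread of part indices in the cell.
            return 3.0 + (max(segment) - min(segment))

        print(best_partition([4, 5, 0, 1, 2, 7, 6, 3], toy_cost))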

  9. Simulation of Defect Reduction in Block Copolymer Thin Films by Solvent Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Su-Mi; Khaira, Gurdaman S.; Ramírez-Hernández, Abelardo; Müller, Marcus; Nealey, Paul F.; de Pablo, Juan J.

    2015-01-20

    Solvent annealing provides an effective means to control the self-assembly of block copolymer (BCP) thin films. Multiple effects, including swelling, shrinkage, and morphological transitions, act in concert to yield ordered or disordered structures. The current understanding of these processes is limited; by relying on a theoretically informed coarse-grained model of block copolymers, a conceptual framework is presented that permits prediction and rationalization of experimentally observed behaviors. Through proper selection of several process conditions, it is shown that a narrow window of solvent pressures exists over which one can direct a BCP material to form well-ordered, defect-free structures.

  10. 1-Dimensional simulation of thermal annealing in a commercial nuclear power plant reactor pressure vessel wall section

    International Nuclear Information System (INIS)

    Nakos, J.T.; Rosinski, S.T.; Acton, R.U.

    1994-11-01

    The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m x 1.2 m x 17.1 cm thick [4 ft x 4 ft x 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the “mirror” insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 3.0 m x 3.0 m [10 ft x 10 ft] square. Experiments were performed at temperature heat-up/cooldown rates of 7, 14, and 28 degrees C/hr [12.5, 25, and 50 degrees F/hr] as measured on the heated face. A peak temperature of 454 degrees C [850 degrees F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be 1-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.

  11. Using the adaptive blockset for simulation and rapid prototyping

    DEFF Research Database (Denmark)

    Ravn, Ole

    1999-01-01

    The paper presents the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The basics of indirect adaptive controllers are summarized. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.

  12. Convection methodology for fission track annealing: direct and inverse numerical simulations in the multi-exponential case

    International Nuclear Information System (INIS)

    Miellou, J.C.; Igli, H.; Grivet, M.; Rebetez, M.; Chambaudet, A.

    1994-01-01

    In minerals, the uranium fission tracks are sensitive to temperature and time. The consequence is that the etchable lengths are reduced. To simulate the phenomenon, at the last International Conference on Nuclear Tracks in solids at Beijing in 1992, we proposed a convection model for fission track annealing based on a reaction situation associated with only one activation energy. Moreover a simple inverse method based on the resolution of an ordinary differential equation was described, making it possible to retrace the thermal history in this mono-exponential situation. The aim of this paper is to consider a more involved class of models including multi-exponentials associated with several activation energies. We shall describe in this framework the modelling of the direct phenomenon and the resolution of the inverse problem. Results of numerical simulations and comparison with the mono-exponential case will be presented. 5 refs. (author)

  13. Electrical Impedance Tomography Reconstruction Through Simulated Annealing using a New Outside-in Heuristic and GPU Parallelization

    International Nuclear Information System (INIS)

    Tavares, R S; Tsuzuki, M S G; Martins, T C

    2012-01-01

    Electrical Impedance Tomography (EIT) is an imaging technique that attempts to reconstruct the conductivity distribution inside an object from electrical currents and potentials applied and measured at its surface. The EIT reconstruction problem is approached as an optimization problem, where the difference between the simulated and measured distributions must be minimized. This optimization problem can be solved using Simulated Annealing (SA), but at a high computational cost. To reduce the computational load, it is possible to use an incomplete evaluation of the objective function. This algorithm was shown to exhibit an outside-in behavior, determining the impedance of the external elements first, similar to a layer-stripping algorithm. A new outside-in heuristic that makes use of this property is proposed. The paper also presents the impact of using a GPU to parallelize the matrix-vector multiplications and triangular solvers. Results with experimental data are presented. The outside-in heuristic proved to be faster than the conventional SA algorithm.
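
    The "incomplete evaluation of the objective function" idea is independent of the EIT details and easy to isolate: score a candidate on a random subset of the measurements and scale the subtotal up. The forward model, data and subset size below are toy assumptions; in the real algorithm the residuals come from a finite-element simulation of the electrode potentials.

        import random

        measured = [0.1 * i for i in range(32)]  # toy measured electrode data

        def forward(conductivity, i):
            # Toy forward model standing in for the finite-element solver.
            return conductivity * 0.1 * i

        def partial_misfit(conductivity, sample_size=8):
            # Incomplete evaluation: compare simulated and measured values on
            # a random subset only, scaled up to estimate the full misfit.
            idx = random.sample(range(len(measured)), sample_size)
            sub = sum((forward(conductivity, i) - measured[i]) ** 2 for i in idx)
            return sub * len(measured) / sample_size

        # Inside the SA loop this cheap, noisy estimate replaces the full
        # objective on most iterations, cutting the cost per candidate.
        print(partial_misfit(0.8))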

  14. A fitting algorithm based on simulated annealing techniques for efficiency calibration of HPGe detectors using different mathematical functions

    Energy Technology Data Exchange (ETDEWEB)

    Hurtado, S. [Servicio de Radioisotopos, Centro de Investigacion, Tecnologia e Innovacion (CITIUS), Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: shurtado@us.es; Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Aptd. 1065, 41080 Sevilla (Spain); Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S.A. Universidad de Sevilla, Avda, Reina Mercedes 2, 41012 Sevilla (Spain)

    2008-09-11

    In this work several mathematical functions are compared in order to perform the full-energy peak efficiency calibration of HPGe detectors, using a 126 cm³ HPGe coaxial detector and gamma-ray energies ranging from 36 to 1460 keV. Statistical tests and Monte Carlo simulations were used to study the performance of the fitting curve equations. Furthermore, the fitting of these complex functional forms to experimental data is a non-linear multi-parameter minimization problem. In gamma-ray spectrometry, non-linear least-squares fitting algorithms (the Levenberg-Marquardt method) usually provide fast convergence while minimizing the reduced chi-square χR²; however, they sometimes reach only local minima. In order to overcome that shortcoming, a hybrid algorithm based on simulated annealing (HSA) techniques is proposed. Additionally, a new function is suggested that models the efficiency curve of germanium detectors in gamma-ray spectrometry.

  15. Comparison of the Simulated Annealing and Harmony Search Algorithms in the Application of Picking Order Sequence

    Directory of Open Access Journals (Sweden)

    Tanti Octavia

    2017-12-01

    Full Text Available Implementation of mobile rack warehouses is common in the manufacturing industry because it can minimize the warehouse area used. Applying picking orders in the retrieval of Stock Keeping Units (SKUs) in mobile rack warehouses can give fast order loading. This research aims to find out which algorithm is better for applying the picking order sequence in a mobile rack warehouse. The algorithms used are the Simulated Annealing (SA) and Harmony Search (HS) algorithms. Both of these algorithms are compared in terms of the gap with the shortest path method. The results show that the HS algorithm produces a better solution than the SA algorithm with lower CPU time, but the convergence rate of HS is lower than that of SA. HS was able to produce a better solution than the shortest path method in 9 of 15 cases, while SA did so in only 8.

  16. An adaptive simulation tool for evacuation scenarios

    NARCIS (Netherlands)

    Formolo, Daniel; van der Wal, C. Natalie

    2017-01-01

    Building useful and efficient models and tools for a varied audience, such as evacuation simulators for scientists, engineers and crisis managers, can be tricky. Even good models can fail to provide information when the user's tools for the model lack resources. The aim of this work is to

  17. View-Dependent Adaptive Cloth Simulation with Buckling Compensation.

    Science.gov (United States)

    Koh, Woojong; Narain, Rahul; O'Brien, James F

    2015-10-01

    This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening, while locking in extremely coarsened regions is inhibited by modifying the material model to compensate for unresolved sub-element buckling. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows. The approach produces a 2× speed-up for a single character and more than 4× for a small group as compared to view-independent adaptive simulations, and respectively 5× and 9× speed-ups as compared to non-adaptive simulations.

  18. SARAPAN-A simulated-annealing-based tool to generate random patterned-channel-age in CANDU fuel management analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kastanya, Doddy [Safety and Licensing Department, Candesco Division of Kinectrics Inc., Toronto (Canada)

    2017-02-15

    In any reactor physics analysis, the instantaneous power distribution in the core can be calculated when the actual bundle-wise burnup distribution is known. Considering the fact that CANDU (Canada Deuterium Uranium) utilizes on-power refueling to compensate for the reduction of reactivity due to fuel burnup, in the CANDU fuel management analysis, snapshots of power and burnup distributions can be obtained by simulating and tracking the reactor operation over an extended period using various tools such as the *SIMULATE module of the Reactor Fueling Simulation Program (RFSP) code. However, for some studies, such as an evaluation of a conceptual design of a next-generation CANDU reactor, the preferred approach to obtain a snapshot of the power distribution in the core is based on the patterned-channel-age model implemented in the *INSTANTAN module of the RFSP code. The objective of this approach is to obtain a representative snapshot of core conditions quickly. At present, such patterns could be generated by using a program called RANDIS, which is implemented within the *INSTANTAN module. In this work, we present an alternative approach to derive the patterned-channel-age model where a simulated-annealing-based algorithm is used to find such patterns, which produce reasonable power distributions.

  20. A Single-Machine Two-Agent Scheduling Problem by a Branch-and-Bound and Three Simulated Annealing Algorithms

    Directory of Open Access Journals (Sweden)

    Shangchia Liu

    2015-01-01

    Full Text Available In the field of distributed decision making, different agents share a common processing resource, and each agent wants to minimize a cost function depending on its jobs only. These issues arise in different application contexts, including real-time systems, integrated service networks, industrial districts, and telecommunication systems. Motivated by its importance in practical applications, we consider two-agent scheduling on a single machine, where the objective is to minimize the total completion time of the jobs of the first agent subject to an upper bound on the total completion time of the jobs of the second agent. For the proposed problem, a branch-and-bound algorithm and three simulated annealing algorithms are developed to find optimal and near-optimal solutions, respectively. In addition, extensive computational experiments are conducted to test the performance of the algorithms.
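
    For illustration, the following sketch shows one plausible simulated annealing variant for this two-agent setting, assuming synthetic job data, a hypothetical bound Q on the second agent's total completion time, and a simple penalty for infeasible sequences; the paper's three SA variants and its branch-and-bound scheme are not reproduced here.

```python
# Minimal sketch of simulated annealing for two-agent single-machine scheduling.
# Job data and the bound Q are hypothetical; infeasibility is penalized.
import math
import random

jobs = [(3, 'A'), (5, 'A'), (2, 'B'), (7, 'A'), (4, 'B')]  # (processing time, agent)
Q = 25  # assumed upper bound on agent B's total completion time

def cost(seq):
    t = total_a = total_b = 0
    for p, agent in seq:
        t += p  # completion time of this job
        if agent == 'A':
            total_a += t
        else:
            total_b += t
    penalty = 1000 * max(0, total_b - Q)  # penalize violating agent B's bound
    return total_a + penalty

def sa(seq, t0=50.0, cooling=0.99, steps=5000):
    cur, best, temp = seq[:], seq[:], t0
    for _ in range(steps):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]  # pairwise-swap neighbourhood
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        temp *= cooling
    return best, cost(best)

print(sa(jobs))
```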

  1. Concept for Multi-cycle Nuclear Fuel Optimization Based On Parallel Simulated Annealing With Mixing of States

    International Nuclear Information System (INIS)

    Kropaczek, David J.

    2008-01-01

    A new concept for performing nuclear fuel optimization over a multi-cycle planning horizon is presented. The method provides for an implicit coupling between traditionally separate in-core and out-of-core fuel management decisions including determination of: fresh fuel batch size, enrichment and bundle design; exposed fuel reuse; and core loading pattern. The algorithm uses simulated annealing optimization, modified with a technique called mixing of states that allows for deployment in a scalable parallel environment. Analysis of algorithm performance for a transition cycle design (i.e. a PWR 6 month cycle length extension) demonstrates the feasibility of the approach as a production tool for fuel procurement and multi-cycle core design. (authors)

  2. A hybrid simulated annealing approach to handle energy resource management considering an intensive use of electric vehicles

    DEFF Research Database (Denmark)

    Sousa, Tiago; Vale, Zita; Carvalho, Joao Paulo

    2014-01-01

    The massification of electric vehicles (EVs) can have a significant impact on the power system, requiring a new approach for the energy resource management. The energy resource management has the objective to obtain the optimal scheduling of the available resources considering distributed...... to determine the best solution in a reasonable amount of time. This paper presents a hybrid artificial intelligence technique to solve a complex energy resource management problem with a large number of resources, including EVs, connected to the electric network. The hybrid approach combines simulated...... annealing (SA) and ant colony optimization (ACO) techniques. The case study concerns different EVs penetration levels. Comparisons with a previous SA approach and a deterministic technique are also presented. For 2000 EVs scenario, the proposed hybrid approach found a solution better than the previous SA...

  3. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    Science.gov (United States)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models in forecasting dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
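
    The record does not give the DESA update rules, but a generic hybrid of this kind can be sketched as differential evolution mutation and crossover combined with an annealed Metropolis acceptance test; the quadratic objective below is a toy stand-in for the SARIMA estimation criterion.

```python
# Sketch of a differential evolution / simulated annealing (DESA) hybrid:
# DE-style mutation and crossover propose trials, and an annealed Metropolis
# rule occasionally accepts worse trials to escape local minima.
import math
import random

def desa(objective, bounds, pop_size=20, f=0.8, cr=0.9, t0=1.0, cooling=0.98, gens=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation with binomial crossover (simplified).
            trial = [a[d] + f * (b[d] - c[d]) if random.random() < cr else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            delta = objective(trial) - objective(pop[i])
            # SA acceptance: keep improvements, worse trials with annealed probability.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pop[i] = trial
        temp *= cooling
    return min(pop, key=objective)

# Toy usage: minimize a shifted quadratic in two dimensions.
print(desa(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [(-5, 5), (-5, 5)]))
```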

  4. Fully Adaptive Radar Modeling and Simulation Development

    Science.gov (United States)

    2017-04-01

    using an object oriented programming (OOP) approach. It includes a FAR engine to control the operation of the perception-action (PA) cycle and ... development and testing on simulated, previously collected, and real-time streaming data. The architecture is coded in MATLAB using an object oriented

  5. Anisotropy evolution of nanoparticles under annealing: Benefits of isothermal remanent magnetization simulation

    Science.gov (United States)

    Tournus, Florent; Tamion, Alexandre; Hillion, Arnaud; Dupuis, Véronique

    2016-12-01

    Isothermal remanent magnetization (IRM) combined with direct current demagnetization (DcD) measurements are powerful tools to qualitatively study the interactions (through the Δm parameter) between magnetic particles in granular media. For magnetic nanoparticles diluted in a matrix, it is possible to reach a regime where Δm is equal to zero, i.e. where interparticle interactions are negligible: one can then infer the intrinsic properties of nanoparticles through measurements on an assembly, which are analyzed by a combined fit procedure (based on the Stoner-Wohlfarth and Néel models). Here we illustrate the benefits of a quantitative analysis of IRM curves for Co nanoparticles embedded in amorphous carbon (before and after annealing): while a large anisotropy increase might have been deduced from the other measurements, IRM curves provide an improved characterization of the nanomagnets' intrinsic properties, revealing that this is in fact not the case. This shows that IRM curves, which only probe the irreversible switching of nanomagnets, are complementary to the widely used low-field susceptibility curves.

  8. Adaptive LES Methodology for Turbulent Flow Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of DNS for three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic

  9. The atomic-scale nucleation mechanism of NiTi metallic glasses upon isothermal annealing studied via molecular dynamics simulations.

    Science.gov (United States)

    Li, Yang; Li, JiaHao; Liu, BaiXin

    2015-10-28

    Nucleation is one of the most essential transformation paths in phase transition and exerts a significant influence on the crystallization process. Molecular dynamics simulations were performed to investigate the atomic-scale nucleation mechanisms of NiTi metallic glasses upon devitrification at various temperatures (700 K, 750 K, 800 K, and 850 K). Our simulations reveal that at 700 K and 750 K, nucleation is polynuclear with high nucleation density, while at 800 K it is mononuclear. The underlying nucleation mechanisms have been clarified, manifesting that nucleation can be induced either by the initial ordered clusters (IOCs) or by the other precursors of nuclei evolved directly from the supercooled liquid. IOCs and other precursors stem from the thermal fluctuations of bond orientational order in supercooled liquids during the quenching process and during the annealing process, respectively. The simulation results not only elucidate the underlying nucleation mechanisms varied with temperature, but also unveil the origin of nucleation. These discoveries offer new insights into the devitrification mechanism of metallic glasses.

  10. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  11. A Bacterial-Based Algorithm to Simulate Complex Adaptative Systems

    OpenAIRE

    González Rodríguez, Diego; Hernández Carrión, José Rodolfo

    2014-01-01

    Paper presented at the 13th International Conference on Simulation of Adaptive Behavior which took place at Castellón, Spain in 2014, July 22-25. Bacteria have demonstrated an amazing capacity to overcome environmental changes by collective adaptation through genetic exchanges. Using a distributed communication system and sharing individual strategies, bacteria propagate mutations as innovations that allow them to survive in different environments. In this paper we present an agent-based...

  12. The application of neural network integrated with genetic algorithm and simulated annealing for the simulation of rare earths separation processes by the solvent extraction technique using EHEHPA agent

    International Nuclear Information System (INIS)

    Tran Ngoc Ha; Pham Thi Hong Ha

    2003-01-01

    In the present work, a neural network has been used for mathematically modeling the equilibrium data of a mixture of two rare earth elements, namely Nd and Pr, with the PC88A agent. A thermo-genetic algorithm, based on the ideas of the genetic algorithm and the simulated annealing algorithm, has been used in the training procedure of the neural networks, giving better results in comparison with the traditional modeling approach. The neural network modeling the experimental data is further used in a computer program to simulate the solvent extraction process for the two elements Nd and Pr. Based on this computer program, various optional schemes for the separation of Nd and Pr have been investigated and proposed. (author)

  13. Simulation and Rapid Prototyping of Adaptive Control Systems using the Adaptive Blockset for Simulink

    DEFF Research Database (Denmark)

    Ravn, Ole

    1998-01-01

    The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.

  14. Adaptive Annealed Importance Sampling for Multimodal Posterior Exploration and Model Selection with Application to Extrasolar Planet Detection

    Science.gov (United States)

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
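
    A stripped-down sketch of the importance-sampling core described here: draw from a Gaussian mixture proposal, weight against an unnormalized target, monitor the effective sample size (ESS), and adapt the mixture by a weighted EM-style update. The bimodal one-dimensional target and the fixed component count are simplifying assumptions; the paper's delete/merge/add adaptation is not reproduced.

```python
# Illustrative adaptive importance sampling with ESS monitoring.
import numpy as np

def log_target(x):
    # Unnormalized bimodal target, standing in for a multimodal posterior.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

rng = np.random.default_rng(0)
means, sigmas, mix = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for step in range(30):
    # Draw from the Gaussian mixture proposal.
    comp = rng.choice(2, size=2000, p=mix)
    x = means[comp] + sigmas[comp] * rng.standard_normal(2000)
    # Log density of the proposal and normalized importance weights.
    log_comp = (np.log(mix) - 0.5 * ((x[:, None] - means) / sigmas) ** 2
                - np.log(sigmas * np.sqrt(2.0 * np.pi)))
    log_q = np.logaddexp.reduce(log_comp, axis=1)
    logw = log_target(x) - log_q
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)  # larger ESS -> proposal resembles the target better
    # Weighted EM-style update of component weights and means.
    resp = np.exp(log_comp - log_q[:, None])  # responsibilities, rows sum to 1
    wr = w[:, None] * resp
    mix = wr.sum(axis=0) / wr.sum()
    means = (wr * x[:, None]).sum(axis=0) / wr.sum(axis=0)

print("ESS:", round(ess, 1), "means:", np.round(means, 2), "weights:", np.round(mix, 2))
```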

  15. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  16. The relative entropy is fundamental to adaptive resolution simulations

    Science.gov (United States)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.

  17. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    Science.gov (United States)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  18. Development of an adaptive sawmill-flow simulator template for ...

    African Journals Online (AJOL)

    Development of an adaptive sawmill-flow simulator template for predicting results ... including: raw materials, personnel, equipment, product mix, product quality, orders ... Profitable sawing of small diameter logs requires high-speed processing, use of ... performance measures due to changes in mill layout, raw material and ...

  19. The behavior of adaptive bone-remodeling simulation models

    NARCIS (Netherlands)

    H.H. Weinans (Harrie); R. Huiskes (Rik); H.J. Grootenboer

    1992-01-01

    textabstractThe process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule

  20. Ensemble annealing of complex physical systems

    OpenAIRE

    Habeck, Michael

    2015-01-01

    Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is th...

  1. Simulation for noise cancellation using LMS adaptive filter

    Science.gov (United States)

    Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung

    2017-06-01

    In this paper, the fundamental algorithm of noise cancellation, the Least Mean Square (LMS) algorithm, is studied and enhanced with an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. A noise-corrupted speech signal and an engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully cancelled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully cancelled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation at the lower frequency range.
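
    The LMS update itself is standard and can be sketched as follows; the synthetic "speech" tone, the white-noise engine reference, and the short FIR acoustic path are stand-ins for the signals used in the paper.

```python
# NumPy sketch of an LMS adaptive noise canceller: the filter adapts on a
# noise reference and subtracts its noise estimate from the corrupted signal.
import numpy as np

def lms_cancel(corrupted, noise_ref, taps=8, mu=0.01):
    w = np.zeros(taps)
    out = np.zeros_like(corrupted)
    for n in range(taps - 1, len(corrupted)):
        x = noise_ref[n - taps + 1:n + 1][::-1]  # most recent reference samples first
        y = w @ x                                # estimate of noise reaching the mic
        e = corrupted[n] - y                     # error = cleaned output sample
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(1)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 5 * t)               # slow tone standing in for speech
noise_ref = rng.standard_normal(t.size)          # engine-noise reference input
noise_at_mic = 0.5 * noise_ref                   # reference filtered by an unknown
noise_at_mic[1:] += 0.3 * noise_ref[:-1]         # acoustic path (assumed short FIR)
noise_at_mic[2:] += 0.2 * noise_ref[:-2]
corrupted = speech + noise_at_mic

cleaned = lms_cancel(corrupted, noise_ref)
print("residual noise power:", np.mean((cleaned[2000:] - speech[2000:]) ** 2))
```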

  2. Effect of vergence adaptation on convergence-accommodation: model simulations.

    Science.gov (United States)

    Sreenivasan, Vidhyapriya; Bobier, William R; Irving, Elizabeth L; Lakshminarayanan, Vasudevan

    2009-10-01

    Several theoretical control models depict the adaptation effects observed in the accommodation and vergence mechanisms of the human visual system. Two current quantitative models differ in their approach of defining adaptation and in identifying the effect of controller adaptation on their respective cross-links between the vergence and accommodative systems. Here, we compare the simulation results of these adaptation models with empirical data obtained from emmetropic adults when they performed sustained near task through + 2D lens addition. The results of our experimental study showed an initial increase in exophoria (a divergent open-loop vergence position) and convergence-accommodation (CA) when viewing through +2D lenses. Prolonged fixation through the near addition lenses initiated vergence adaptation, which reduced the lens-induced exophoria and resulted in a concurrent reduction of CA. Both models showed good agreement with empirical measures of vergence adaptation. However, only one model predicted the experimental time course of reduction in CA. The pattern of our empirical results seem to be best described by the adaptation model that indicates the total vergence response to be a sum of two controllers, phasic and tonic, with the output of phasic controller providing input to the cross-link interactions.

  3. GPU accelerated population annealing algorithm

    Science.gov (United States)

    Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures. Program Files doi:http://dx.doi.org/10.17632/sgzt4b7b3m.1 Licensing provisions: Creative Commons Attribution license (CC BY 4.0) Programming language: C, CUDA External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β. Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature
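
    The paper's contribution is a CUDA implementation; the pure-Python sketch below only illustrates the structure of the population annealing loop for the 2D Ising model (resampling by Boltzmann weight ratios followed by Metropolis sweeps), with lattice size, population size and temperature schedule chosen arbitrarily.

```python
# Structure-only sketch of population annealing for the 2D Ising model.
import numpy as np

L, R = 16, 100                      # lattice size and population size (arbitrary)
rng = np.random.default_rng(2)

def energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries, J = 1.
    return -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))

def metropolis_sweep(s, beta):
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

pop = [rng.choice([-1, 1], size=(L, L)) for _ in range(R)]
betas = np.linspace(0.0, 0.5, 26)   # fixed schedule; the paper adapts the steps
for b_prev, b in zip(betas[:-1], betas[1:]):
    # Resample replicas with probability proportional to exp(-(b - b_prev) * E).
    logw = np.array([-(b - b_prev) * energy(s) for s in pop])
    w = np.exp(logw - logw.max())
    counts = rng.multinomial(R, w / w.sum())
    pop = [s.copy() for s, c in zip(pop, counts) for _ in range(c)]
    for s in pop:
        metropolis_sweep(s, b)

print("energy per spin at beta=0.5:", np.mean([energy(s) for s in pop]) / (L * L))
```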

  4. INTRODUCTION OF MEMORY ELEMENTS IN SIMULATED ANNEALING METHOD TO SOLVE MULTIOBJECTIVE PARALLEL MACHINE SCHEDULING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Felipe Baesler

    2008-12-01

    Full Text Available This paper introduces a variant of the simulated annealing metaheuristic for solving multiobjective optimization problems, called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). The technique adds short- and long-term memory elements to the simulated annealing algorithm in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study from the regional sawmill industry. The results showed that MOSARTS behaved much better than the other methods, finding better solutions in terms of dominance and frontier dispersion.

  5. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    Science.gov (United States)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
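
    The idea behind SBFM can be sketched compactly: parameterize the concentration field as a sum of smooth Gaussian basis functions and let simulated annealing minimize the misfit to the ray-integral data. Everything below (a single isotropic Gaussian plume, synthetic ray geometry, numerical line integration) is an illustrative assumption, not the authors' implementation.

```python
# Sketch of smooth-basis-function minimization: fit Gaussian plume parameters
# to synthetic ray-integral (open-path) measurements with simulated annealing.
import math
import random

def field(params, x, y):
    # params: flat list of (amplitude, cx, cy, sigma) per Gaussian.
    total = 0.0
    for a, cx, cy, s in zip(*[iter(params)] * 4):
        total += a * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s * s))
    return total

def ray_integral(params, p0, p1, n=50):
    # Numerical line integral of the field along the ray from p0 to p1.
    length = math.dist(p0, p1)
    return sum(field(params,
                     p0[0] + (p1[0] - p0[0]) * k / n,
                     p0[1] + (p1[1] - p0[1]) * k / n)
               for k in range(n + 1)) * length / (n + 1)

rays = [((0, y), (1, 1 - y)) for y in (0.1, 0.3, 0.5, 0.7, 0.9)] + \
       [((x, 0), (1 - x, 1)) for x in (0.2, 0.4, 0.6, 0.8)]
true = [1.0, 0.6, 0.4, 0.15]                      # one synthetic plume
measured = [ray_integral(true, *r) for r in rays]

def misfit(params):
    return sum((ray_integral(params, *r) - m) ** 2 for r, m in zip(rays, measured))

params, temp = [0.5, 0.5, 0.5, 0.2], 1.0
for _ in range(3000):
    cand = [max(p + random.gauss(0, 0.05), 0.01) for p in params]
    d = misfit(cand) - misfit(params)
    if d < 0 or random.random() < math.exp(-d / temp):
        params = cand
    temp *= 0.999
print("recovered (a, cx, cy, sigma):", [round(p, 2) for p in params])
```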

  6. Modeling and Simulated Annealing Optimization of Surface Roughness in CO2 Laser Nitrogen Cutting of Stainless Steel

    Directory of Open Access Journals (Sweden)

    M. Madić

    2013-09-01

    Full Text Available This paper presents a systematic methodology for empirical modeling and optimization of surface roughness in nitrogen, CO2 laser cutting of stainless steel. The surface roughness prediction model was developed in terms of laser power, cutting speed, assist gas pressure and focus position by using an artificial neural network (ANN). To cover a wider range of laser cutting parameters and obtain an experimental database for the ANN model development, Taguchi's L27 orthogonal array was implemented in the experimental plan. The developed ANN model was expressed as an explicit nonlinear function, while the influence of laser cutting parameters and their interactions on surface roughness was analyzed by generating 2D and 3D plots. The final goal of the experimental study focuses on the determination of the optimal laser cutting parameters for the minimization of surface roughness. Since the solution space of the developed ANN model is complex, and the possibility of many local solutions is great, simulated annealing (SA) was selected as the method for the optimization of surface roughness.

  7. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments. These experiments are often very expensive and time-consuming. Hence, digital image analysis techniques are a fast and low-cost methodology for physical property prediction, requiring only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using simulated annealing, a relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results are presented. (author)

  8. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach was practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.

  9. Displacement cascades and defects annealing in tungsten, Part I: Defect database from molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Setyawan, Wahyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Nandipati, Giridhar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Roche, Kenneth J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Univ. of Washington, Seattle, WA (United States); Heinisch, Howard L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Kurtz, Richard J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-07-01

    Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×Ed, where Ed = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy-dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×Ed. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while the vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5Tm) practically no large vacancy clusters are formed, meanwhile large SIA clusters appear in all simulations. The implication of such asymmetry on the long-term defect survival and damage accumulation is discussed. In addition, <100> {110} SIA loops are observed to form directly in the highest energy cascades, while vacancy <100> loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both the vacancy and SIA loops with Burgers vector of <100> type is relatively rare.

  10. Selection for autochthonous bifidobacterial isolates adapted to simulated gastrointestinal fluid

    Directory of Open Access Journals (Sweden)

    H Jamalifar

    2010-03-01

    Full Text Available "nBackground and the purpose of the study: Bifidobacterial strains are excessively sensitive to acidic conditions and this can affect their living ability in the stomach and fermented foods, and as a result, restrict their use as live probiotic cultures. The aim of the present study was to obtain bifidobacterial isolates with augmented tolerance to simulated gastrointestinal condition using cross-protection method. "nMethods: Individual bifidobacterial strains were treated in acidic environment and also in media containing bile salts and NaCl. Viability of the acid and acid-bile-NaCl tolerant isolates was further examined in simulated gastric and small intestine by subsequent incubation of the probiotic bacteria in the corresponding media for 120 min. Antipathogenic activities of the adapted isolates were compared with those of the original strains. "nResults and major conclusion: The acid and acid-bile-NaCl adapted isolates showed improved viabilities significantly (p<0.05 in simulated gastric fluid compared to their parent strains. The levels of reduction in bacterial count (Log cfu/ml of the acid and acid-bile-NaCl adapted isolates obtained in simulated gastric fluid ranged from 0.64-3.06 and 0.36-2.43 logarithmic units after 120 min of incubation. There was no significant difference between the viability of the acid-bile-NaCl-tolerant isolates and the original strains in simulated small intestinal condition except for Bifidobacterium adolescentis (p<0.05. The presence of 15 ml of supernatants of acid-bile-NaCl-adapted isolates and also those of the initial Bifidobacterium strains inhibited pathogenic bacterial growth for 24 hrs. Probiotic bacteria with improved ability to survive in harsh gastrointestinal environment could be obtained by subsequent treatment of the strains in acid, bile salts and NaCl environments.

  11. Searching for stable Si(n)C(n) clusters: combination of stochastic potential surface search and pseudopotential plane-wave Car-Parinello simulated annealing simulations.

    Science.gov (United States)

    Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu

    2013-07-22

    To find low energy Si(n)C(n) structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterations of this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to 10 of the lowest energy isomers were further optimized using B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of low energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings while Si connects to unsaturated bonds at the periphery of the carbon segregation as single atoms or clusters when n is small and when n is large a silicon network spans over the carbon segregation region.

  13. simulate_CAT: A Computer Program for Post-Hoc Simulation for Computerized Adaptive Testing

    Directory of Open Access Journals (Sweden)

    İlker Kalender

    2015-06-01

    Full Text Available This paper presents computer software developed by the author. The software conducts post-hoc simulations for computerized adaptive testing based on real responses of examinees to paper-and-pencil tests, under different parameters that can be defined by the user. In this paper, brief information is given about post-hoc simulations. After that, the working principle of the software is described and a sample simulation with the required input files is shown. Finally, the output files are described.
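
    A toy post-hoc CAT loop in the spirit of such a tool is sketched below, assuming a 2PL item response model, synthetic item parameters and responses, maximum-information item selection, and a one-step Newton ability update; the actual software's parameters and file formats are not modeled.

```python
# Toy post-hoc CAT simulation: re-administer items from a fixed response set,
# each time picking the unused item with maximum Fisher information (2PL).
import numpy as np

rng = np.random.default_rng(3)
n_items = 50
a = rng.uniform(0.8, 2.0, n_items)          # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)           # difficulty parameters
true_theta = 0.7
responses = (rng.random(n_items) < 1 / (1 + np.exp(-a * (true_theta - b)))).astype(int)

theta, used = 0.0, []
for _ in range(15):                          # fixed test length stopping rule
    p = 1 / (1 + np.exp(-a * (theta - b)))
    info = a ** 2 * p * (1 - p)              # 2PL Fisher information per item
    info[used] = -np.inf                     # never re-select administered items
    item = int(np.argmax(info))
    used.append(item)
    # One-step Newton update of the ML ability estimate on items seen so far.
    idx = np.array(used)
    p_used = 1 / (1 + np.exp(-a[idx] * (theta - b[idx])))
    grad = np.sum(a[idx] * (responses[idx] - p_used))
    hess = -np.sum(a[idx] ** 2 * p_used * (1 - p_used))
    theta = float(np.clip(theta - grad / hess, -4, 4))

print("estimated theta:", round(theta, 2), "items used:", len(used))
```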

  14. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    Science.gov (United States)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for optimisation of the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on a cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties leading to possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible direction for further improvement of the optimisation is proposed.

  15. Semiconductor annealing

    International Nuclear Information System (INIS)

    Young, J.M.; Scovell, P.D.

    1982-01-01

    A process for annealing crystal damage in ion implanted semiconductor devices in which the device is rapidly heated to a temperature between 450 and 900 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. (author)

  16. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    International Nuclear Information System (INIS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-01-01

    Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula’s material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element’s remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than
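
    The element-wise update described here follows the classic strain-energy-density remodeling rule; a schematic sketch is given below, with the finite element solve replaced by a placeholder function and all constants hypothetical.

```python
# Schematic strain-energy-driven bone remodeling update: each element's
# density moves toward equilibrium between its stimulus U/rho and a
# reference stimulus k. The FE strain-energy solve is a dummy stand-in.
import numpy as np

def strain_energy_density(rho):
    # Placeholder for the FE solve; assumes denser bone stores less energy
    # per unit volume ratio under the same load (hypothetical form).
    return 0.02 * rho ** 0.5

rho = np.full(100, 0.8)         # g/cm^3, uniform starting density as in the abstract
k, B, dt = 0.02, 1.0, 1.0       # reference stimulus, rate constant, time step
for _ in range(10):             # ten load iterations, mirroring the simulation
    stimulus = strain_energy_density(rho) / rho
    rho = np.clip(rho + B * (stimulus - k) * dt, 0.01, 1.74)  # bounded density
print(rho[:5])
```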

  17. Adaptive implicit method for thermal compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)

    2008-10-15

    As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires the solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most widely used technique for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure, explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, including: the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented, along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.

  18. Adaptive resolution simulation of an atomistic protein in MARTINI water

    International Nuclear Information System (INIS)

    Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J.; Praprotnik, Matej

    2014-01-01

    We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations

  19. An adaptive nonlinear solution scheme for reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)

    1996-12-31

    Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid 'effective' properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.

  20. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    Science.gov (United States)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to study the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark matter only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  1. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    Science.gov (United States)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique and the second is a custom ray-tracing technique; each has its own advantages and drawbacks. We have also compared two parallel programming techniques, the Python multiprocessing library versus MPI runs. The load balancing strategy has to be smartly defined in order to achieve a good speed-up in the computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  2. Dynamically adaptive data-driven simulation of extreme hydrological flows

    Science.gov (United States)

    Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint

    2018-02-01

    Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
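
    The ensemble-based assimilation step described above can be illustrated with a toy perturbed-observation ensemble Kalman filter update. This is a generic sketch under simplifying assumptions (linear observation operator, scalar observation error), not the paper's coupled AMR filter.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Perturbed-observation EnKF analysis step.

    ensemble: (n_members, n_state) forecast states
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = X @ H.T                                    # observation-space anomalies
    R = (obs_err_std ** 2) * np.eye(len(obs))
    # Kalman gain from sample covariances: K = Pxy (Pyy + R)^-1
    Pxy = X.T @ Y / (n - 1)
    Pyy = Y.T @ Y / (n - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)
    # Each member assimilates a perturbed copy of the observations
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(1.0, 0.5, size=(50, 3))           # 50-member toy ensemble
H = np.array([[1.0, 0.0, 0.0]])                    # observe the first component
post = enkf_update(ens, np.array([1.2]), H, 0.1, rng)
print(post.mean(axis=0))                           # posterior mean pulled toward 1.2
```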

  3. Dynamically adaptive data-driven simulation of extreme hydrological flows

    KAUST Repository

    Kumar Jain, Pushkar

    2017-12-27

    Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.

  4. Semiconductor annealing

    International Nuclear Information System (INIS)

    Young, J.M.; Scovell, P.D.

    1981-01-01

    A process for annealing crystal damage in ion implanted semiconductor devices is described in which the device is rapidly heated to a temperature between 450 and 600 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. The process may be enhanced by the application of optical radiation from a Xenon lamp. (author)

  5. Decentralized adaptive control of manipulators - Theory, simulation, and experimentation

    Science.gov (United States)

    Seraji, Homayoun

    1989-01-01

    The author presents a simple decentralized adaptive-control scheme for multijoint robot manipulators based on the independent joint control concept. The control objective is to achieve accurate tracking of desired joint trajectories. The proposed control scheme does not use the complex manipulator dynamic model, and each joint is controlled simply by a PID (proportional-integral-derivative) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. Simulation results are given for a two-link direct-drive manipulator under adaptive independent joint control. The results illustrate trajectory tracking under coupled dynamics and varying payload. The proposed scheme is implemented on a MicroVAX II computer for motion control of the three major joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite coupled nonlinear joint dynamics.
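
    The structure of the scheme, PID feedback plus position-velocity-acceleration feedforward with adjustable gains, can be sketched per joint as below. The adaptation rule, gains, and names here are illustrative placeholders, not Seraji's actual adaptation laws.

```python
import numpy as np

class AdaptiveJointController:
    """Independent-joint control: PID feedback plus a
    position-velocity-acceleration feedforward term with adjustable gains."""

    def __init__(self, kp, ki, kd, gamma=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.ff = np.zeros(3)        # feedforward gains on [q_ref, qd_ref, qdd_ref]
        self.gamma = gamma           # adaptation rate (illustrative gradient rule)
        self.integral = 0.0

    def torque(self, q, qd, q_ref, qd_ref, qdd_ref, dt):
        e, ed = q_ref - q, qd_ref - qd
        self.integral += e * dt
        feedback = self.kp * e + self.ki * self.integral + self.kd * ed
        ref = np.array([q_ref, qd_ref, qdd_ref])
        # Illustrative gain adjustment driven by the tracking error
        self.ff += self.gamma * e * ref * dt
        return feedback + self.ff @ ref

# One control step for a single joint (values are arbitrary)
ctl = AdaptiveJointController(kp=50.0, ki=5.0, kd=10.0)
print(ctl.torque(q=0.0, qd=0.0, q_ref=0.1, qd_ref=0.0, qdd_ref=0.0, dt=0.001))
```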

  6. Strategies in edge plasma simulation using adaptive dynamic nodalization techniques

    International Nuclear Information System (INIS)

    Kainz, A.; Weimann, G.; Kamelander, G.

    2003-01-01

    A wide range of steady-state and transient edge plasma simulation problems require accurate discretization techniques and can then be treated with Finite Element (FE) and Finite Volume (FV) methods. The software used here to meet these meshing requirements is a 2D finite element grid generator. It can produce adaptive unstructured grids that take the flux surface characteristics into consideration. To comply with the common mesh handling features of FE/FV packages, some options have been added to the basic generation tool. These enhancements include quadrilateral meshes without non-regular transition elements, obtained by substituting them with transition constructions consisting of regular quadrilateral elements. Furthermore, triangular grids can be created with one edge parallel to the magnetic field and modified by the basic adaptation/realignment techniques. Enhanced code operation properties and processing capabilities are expected. (author)

  7. A parallel adaptive finite difference algorithm for petroleum reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Hai Minh

    2005-07-01

    Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of numerical solutions to a wide range of problems. Local refinement of the computational mesh, where needed to improve the accuracy of solutions, yields better solution resolution and more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three overall parts. First we explain the methodology and intricacies of the AFD scheme. Then we extend the cell-centered finite difference discretization to a multilevel hierarchy of refined grids, and finally we employ the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced consumption of computing resources. (Author)
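
    The time subcycling described above can be sketched as a short recursion. The `Grid` stub, refinement ratio, and synchronization hook below are placeholders, not the PAFD implementation.

```python
class Grid:
    """Placeholder for the solution data on one refinement level."""
    def __init__(self, name):
        self.name = name
    def step(self, t, dt):
        print(f"{self.name}: advance from t={t:.3f} by dt={dt:.3f}")
    def sync_from(self, finer):
        pass  # average down the fine solution, fix coarse-fine fluxes

def advance(level, t, dt, grids, refine_ratio=2):
    """Berger-Oliger-style subcycling: each finer level takes
    refine_ratio steps of size dt/refine_ratio, then levels synchronize."""
    grids[level].step(t, dt)
    if level + 1 < len(grids):
        fine_dt = dt / refine_ratio
        for k in range(refine_ratio):
            advance(level + 1, t + k * fine_dt, fine_dt, grids, refine_ratio)
        grids[level].sync_from(grids[level + 1])  # common time level reached

advance(0, 0.0, 1.0, [Grid("coarse"), Grid("medium"), Grid("fine")])
```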

  8. Adaptive mesh refinement and adjoint methods in geophysics simulations

    Science.gov (United States)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  9. Direct numerical simulation of bubbles with parallelized adaptive mesh refinement

    International Nuclear Information System (INIS)

    Talpaert, A.

    2015-01-01

    The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps to know precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, the Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model. It is particularly adapted to low-Mach-number flows, that is to say flows whose velocity is much lower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases with a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a batch of patches in order to locally increase precision in the areas that matter most, and to capture fine changes in the front location and its topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all those models. We present the results of those numerical simulations, as well as the speed-up compared to an equivalent non-AMR simulation and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)

  10. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads…

  11. Quantum Annealing and Quantum Fluctuation Effect in Frustrated Ising Systems

    OpenAIRE

    Tanaka, Shu; Tamura, Ryo

    2012-01-01

    The quantum annealing method has attracted wide attention in statistical physics and information science, since it is expected to be a powerful method for obtaining the best solution of an optimization problem, as simulated annealing is. The quantum annealing method was incubated in quantum statistical physics. It is an alternative to simulated annealing, which is well adapted to many optimization problems. In simulated annealing, we obtain a solution of an optimization problem b...
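
    For contrast with the quantum variant discussed in this record, the classical simulated annealing loop it refers to can be sketched as follows (Metropolis acceptance with a geometric cooling schedule; the toy objective and parameters are illustrative).

```python
import math, random

def simulated_annealing(energy, neighbor, x0, t0=1.0, alpha=0.995, steps=20000):
    """Classic simulated annealing: Metropolis acceptance, geometric cooling."""
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        ey = energy(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability
        if ey < e or random.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best, best_e = x, e
        t *= alpha                      # cool down
    return best, best_e

# Toy usage: minimize a rugged 1-D function
f = lambda x: x * x + 10 * math.sin(3 * x)
nbr = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, nbr, x0=5.0))
```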

  12. Hydrodynamics in adaptive resolution particle simulations: Multiparticle collision dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Alekseeva, Uliana, E-mail: Alekseeva@itc.rwth-aachen.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); German Research School for Simulation Sciences (GRS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Winkler, Roland G., E-mail: r.winkler@fz-juelich.de [Theoretical Soft Matter and Biophysics, Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Sutmann, Godehard, E-mail: g.sutmann@fz-juelich.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); ICAMS, Ruhr-University Bochum, D-44801 Bochum (Germany)

    2016-06-01

    A new adaptive resolution technique for particle-based multi-level simulations of fluids is presented. In the approach, the representation of fluid and solvent particles is changed on the fly between an atomistic and a coarse-grained description. The present approach is based on a hybrid coupling of the multiparticle collision dynamics (MPC) method and molecular dynamics (MD), thereby coupling stochastic and deterministic particle-based methods. Hydrodynamics is examined by calculating velocity and current correlation functions for various mixed and coupled systems. We demonstrate that hydrodynamic properties of the mixed fluid are conserved by a suitable coupling of the two particle methods, and that the simulation results agree well with theoretical expectations.

  13. The effect of residual thermal stresses on the fatigue crack growth of laser-surface-annealed AISI 304 stainless steel Part I: computer simulation

    International Nuclear Information System (INIS)

    Shiue, R.K.; Chang, C.T.; Young, M.C.; Tsay, L.W.

    2004-01-01

    The effect of residual thermal stresses on the fatigue crack growth of laser-surface-annealed AISI 304 stainless steel, especially the effect of stress redistribution ahead of the crack tip, was extensively evaluated in this study. Based on the finite element simulation, the longitudinal residual tensile stress field had a width of roughly 20 mm on the laser-irradiated surface and was symmetric with respect to the centerline of the laser-annealed zone (LAZ). Meanwhile, residual compressive stresses were distributed over a wide region away from the LAZ. After a notch was introduced perpendicular to the LAZ, the distribution of longitudinal residual stresses became unsymmetrical about the centerline of the LAZ. High residual compressive stresses existed within a narrow range ahead of the notch tip. The improved crack growth resistance of the laser-annealed specimen might be attributed to those induced compressive stresses. As the notch tip passed through the centerline of the LAZ, the residual stress ahead of the notch tip was completely reverted to residual tensile stress. The residual tensile stresses ahead of the notch tip were maintained even when the notch tip extended deeply into the LAZ. Additionally, the presence of residual tensile stress ahead of the notch tip did not accelerate the fatigue crack growth rate in the compact tension specimen.

  14. Simulated Annealing metaheuristic to solve forest planning problems with integer constraints

    Directory of Open Access Journals (Sweden)

    Flávio Lopes Rodrigues

    2004-04-01

    Full Text Available The objectives of this work were to develop and test the Simulated Annealing (SA) metaheuristic for solving forest management problems with integrity constraints. The SA algorithm developed was tested on four problems containing between 93 and 423 decision variables, subject to singularity constraints and to minimum and maximum production constraints in each period. All problems had the maximization of net present value as their objective. The SA algorithm was coded in Delphi 5.0, and the tests were performed on an AMD K6-II 500 MHz microcomputer with 64 MB of RAM and a 15 GB hard disk. The performance of SA was evaluated according to measures of effectiveness and efficiency. Different values or categories of the SA parameters were tested and compared with respect to their effects on the effectiveness of the algorithm. The best parameter configuration was selected with the L&O test, at 1% probability, and the analyses were carried out using descriptive statistics. The best parameter configuration gave SA a mean effectiveness of 95.36%, a minimum of 83.66%, a maximum of 100%, and a coefficient of variation of 3.18%, relative to the mathematical optimum obtained by the exact branch and bound algorithm. For the largest problem, the efficiency of SA was ten times higher than that of the exact branch and bound algorithm. The good performance of this heuristic reinforced the conclusions, drawn in other works, regarding its enormous potential for solving important forest management problems that are difficult to solve with current computational tools.
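
    To make the problem class concrete, the following toy reproduces the flavor of such an integer-constrained harvest-scheduling model (each stand assigned to exactly one period, periodic volume bounds handled by penalties) and solves it with simulated annealing. All data, penalty weights, and schedule parameters are invented for illustration; the paper's model and parameters differ.

```python
import math, random

random.seed(1)
n_stands, n_periods = 30, 3
npv = [[random.uniform(10, 100) for _ in range(n_periods)] for _ in range(n_stands)]
vol = [[random.uniform(50, 200) for _ in range(n_periods)] for _ in range(n_stands)]
vmin, vmax, penalty = 800.0, 1500.0, 10.0

def objective(assign):
    """Penalized NPV: each stand harvested in exactly one period (singularity),
    with min/max volume per period enforced through penalty terms."""
    total = sum(npv[i][assign[i]] for i in range(n_stands))
    for p in range(n_periods):
        v = sum(vol[i][p] for i in range(n_stands) if assign[i] == p)
        total -= penalty * (max(0.0, vmin - v) + max(0.0, v - vmax))
    return total

assign = [random.randrange(n_periods) for _ in range(n_stands)]
cur, t = objective(assign), 50.0
for _ in range(30000):
    i = random.randrange(n_stands)
    old = assign[i]
    assign[i] = random.randrange(n_periods)       # move one stand to another period
    new = objective(assign)
    if new >= cur or random.random() < math.exp((new - cur) / t):
        cur = new                                  # accept (maximization)
    else:
        assign[i] = old                            # reject: undo the move
    t *= 0.9997
print(round(cur, 1))
```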

  15. Grazing incidence X-ray diffraction study of the tilted phases of Langmuir films: Determination of molecular conformations using simulated annealing

    International Nuclear Information System (INIS)

    Pignat, J.; Daillant, J.; Cantin, S.; Perrot, F.; Konovalov, O.

    2007-01-01

    We have analyzed grazing incidence X-ray diffraction (GIXD) data from condensed phases of Langmuir films of long-chain fatty acids at the air-water interface using a new method consisting of a careful extraction of the structure factors followed by fitting of molecular parameters using simulated annealing. We show that the information contained in GIXD spectra is sufficient to obtain near-atomic structural information. In particular, we directly determine the orientation of the chain backbone planes and of the carboxylic headgroups, and we evaluate chain conformation defects.

  16. Grazing incidence X-ray diffraction study of the tilted phases of Langmuir films: Determination of molecular conformations using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Pignat, J. [LIONS/Service de Chimie Moleculaire, CEA-Saclay bat. 125, F-91191 Gif-sur-Yvette Cedex (France); LPPI, universite de Cergy-Pontoise, 5 mail Gay-Lussac Neuville/Oise, 95031 Cergy-Pontoise Cedex (France); Daillant, J. [LIONS/Service de Chimie Moleculaire, CEA-Saclay bat. 125, F-91191 Gif-sur-Yvette Cedex (France)]. E-mail: jean.daillant@cea.fr; Cantin, S. [LPPI, universite de Cergy-Pontoise, 5 mail Gay-Lussac Neuville/Oise, 95031 Cergy-Pontoise Cedex (France); Perrot, F. [LPPI, universite de Cergy-Pontoise, 5 mail Gay-Lussac Neuville/Oise, 95031 Cergy-Pontoise Cedex (France); Konovalov, O. [ESRF, 6 rue Jules Horowitz, BP220, 38043 Grenoble Cedex (France)

    2007-05-23

    We have analyzed grazing incidence X-ray diffraction (GIXD) data from condensed phases of Langmuir films of long-chain fatty acids at the air-water interface using a new method consisting of a careful extraction of the structure factors followed by fitting of molecular parameters using simulated annealing. We show that the information contained in GIXD spectra is sufficient to obtain near-atomic structural information. In particular, we directly determine the orientation of the chain backbone planes and of the carboxylic headgroups, and we evaluate chain conformation defects.

  17. Multi-objective optimization of in-situ bioremediation of groundwater using a hybrid metaheuristic technique based on differential evolution, genetic algorithms and simulated annealing

    Directory of Open Access Journals (Sweden)

    Kumar Deepak

    2015-12-01

    Full Text Available Groundwater contamination due to leakage of gasoline is one of several causes of groundwater pollution. In the past few years, in-situ bioremediation has attracted researchers because of its ability to remediate contaminants on site at a low cost of remediation. This paper proposes the use of a new hybrid algorithm to optimize a multi-objective function which includes the cost of remediation as the first objective and the residual contaminant at the end of the remediation period as the second objective. The hybrid algorithm was formed by combining the methods of Differential Evolution, Genetic Algorithms and Simulated Annealing. Support Vector Machines (SVM) were used as a virtual simulator for biodegradation of contaminants in the groundwater flow. The results obtained from the hybrid algorithm were compared with those of Differential Evolution (DE), the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Simulated Annealing (SA). It was found that the proposed hybrid algorithm was capable of providing the best solutions. Fuzzy logic was used to find the best compromise solution, and finally a pumping-rate strategy for groundwater remediation is presented for the best compromise solution. The results show that the cost incurred for the best compromise solution is intermediate between the highest and lowest costs incurred for the other non-dominated solutions.

  18. Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc

    2016-09-12

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time for simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
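
    One generic, information-theoretic scoring procedure of the kind mentioned could be an entropy-based block filter like the sketch below. This is schematic, not the paper's scoring code; the bin count and budget fraction are made up.

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of a block's value histogram; higher entropy
    suggests more structure worth visualizing."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def select_blocks(blocks, budget_fraction=0.25):
    """Keep only the highest-scoring fraction of blocks for the pipeline."""
    scores = np.array([block_entropy(b) for b in blocks])
    k = max(1, int(budget_fraction * len(blocks)))
    keep = np.argsort(scores)[-k:]
    return [blocks[i] for i in keep]

rng = np.random.default_rng(0)
blocks = [rng.normal(0, 1 + i, 4096) for i in range(8)]   # toy data blocks
print(len(select_blocks(blocks)))                          # -> 2 of 8 kept
```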

  19. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    Science.gov (United States)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  20. Evaluating impact of market changes on increasing cell-load variation in dynamic cellular manufacturing systems using a hybrid Tabu search and simulated annealing algorithms

    Directory of Open Access Journals (Sweden)

    Aidin Delgoshaei

    2016-06-01

    Full Text Available In this paper, a new method is proposed for scheduling dynamic cellular manufacturing systems (D-CMS) in the presence of uncertain product demands. The aim of this method is to control the process of trading off between in-house manufacturing and outsourcing while product demands are uncertain and can vary from period to period. To solve the proposed problem, a hybrid Tabu Search and Simulated Annealing algorithm is developed to overcome the hardness of the proposed model, and the results are compared with Branch and Bound and Simulated Annealing algorithms. A Taguchi method (L_27 orthogonal array) is used to estimate the parameters of the proposed method in order to solve experiments derived from the literature. An in-depth analysis is conducted on the results in consideration of various factors. For evaluating system imbalance under dynamic market demands, a new measuring index is developed. Our findings indicate that the uncertain condition of market demands affects the routing of product parts and may induce machine-load variations that lead to cell-load diversity. The results showed that the proposed hybrid method can provide solutions of better quality.

  1. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    International Nuclear Information System (INIS)

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-01

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  2. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption, or core observables recorded at core conditions that differ from those at which the adaption was completed. This paper also demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, whereby the multitude of input data to one code is adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.

  3. Determination of performance criteria of safety systems in a nuclear power plant via simulated annealing optimization method

    International Nuclear Information System (INIS)

    Jung, Woo Sik

    1993-02-01

    This study presents an efficient methodology that derives design alternatives and performance criteria for safety functions/systems in commercial nuclear power plants. Determination of design alternatives and intermediate-level performance criteria is posed as a reliability allocation problem. The reliability allocation is performed to determine the reliabilities of safety functions/systems from top-level performance criteria. The reliability allocation is a very difficult multi-objective optimization problem (MOP) as well as a global optimization problem with many local minima. The weighted Chebyshev norm (WCN) approach, in combination with an improved Metropolis algorithm of simulated annealing, is developed and applied to the reliability allocation problem. The hierarchy of probabilistic safety criteria (PSC) may consist of three levels, ranging from the overall top level (e.g., core damage frequency, acute fatality and latent cancer fatality) through the intermediate level (e.g., unavailability of a safety system/function) to the low level (e.g., unavailability of components, component specifications or human error). In order to determine design alternatives of safety functions/systems and the intermediate-level PSC, the reliability allocation is performed from the top-level PSC. The intermediate level corresponds to an objective space and the top level is related to a risk space. The reliability allocation is performed by means of a concept of two-tier noninferior solutions in the objective and risk spaces within the top-level PSC. In this study, two kinds of two-tier noninferior solutions are defined: intolerable intermediate-level PSC and desirable design alternatives of safety functions/systems, determined from Sets 1 and 2, respectively. Set 1 is obtained by simultaneously maximizing not only safety function/system unavailabilities but also risks. Set 1 reflects safety function/system unavailabilities in the worst case. Hence, the
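
    The weighted Chebyshev norm scalarization at the core of the allocation can be illustrated in a few lines; sweeping the weights traces out noninferior (Pareto) solutions. The candidate unavailability vectors and weights below are invented for the demo.

```python
import numpy as np

def wcn(objectives, weights, ideal):
    """Weighted Chebyshev norm: distance of an objective vector from the
    ideal point, measured by the worst weighted deviation."""
    return np.max(weights * np.abs(np.asarray(objectives) - ideal))

# Three candidate solutions with two objectives (e.g., two unavailabilities)
candidates = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]])
ideal = np.zeros(2)
for w1 in (0.2, 0.5, 0.8):
    w = np.array([w1, 1 - w1])
    best = min(candidates, key=lambda c: wcn(c, w, ideal))
    print(w, "->", best)   # different weights select different Pareto points
```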

  4. A novel approach in optimization problem for research reactors fuel plate using a synergy between cellular automata and quasi-simulated annealing methods

    International Nuclear Information System (INIS)

    Barati, Ramin

    2014-01-01

    Highlights: • An innovative optimization technique for multi-objective optimization is presented. • The technique utilizes a combination of CA and quasi-simulated annealing. • Mass and deformation of the fuel plate are considered as objective functions. • The computational burden is significantly reduced compared to classic tools. - Abstract: This paper presents a new and innovative optimization technique utilizing a combination of cellular automata (CA) and quasi-simulated annealing (QSA) as the solver for conceptual design optimization, which is indeed a multi-objective optimization problem. Integrating CA and QSA into a unified optimizer tool has great potential for solving multi-objective optimization problems. Simulating neighborhood effects while taking local information into account, from CA, and accepting transitions based on decrease of the objective function and the Boltzmann distribution, from QSA, as the transition rule make this tool effective in multi-objective optimization. Optimization of fuel plate safety design, while taking into account major goals of conceptual design such as improving reliability and lifetime - which are the most significant elements during shutdown - is a major multi-objective optimization problem. Due to the hugeness of the search space in the fuel plate optimization problem, finding the optimum solution with classical methods requires a huge amount of calculation and CPU time. CA models, utilizing local information, require considerably less computation. In this study, minimizing both the mass and the deformation of the fuel plate of a multipurpose research reactor (MPRR) are considered as objective functions. The results, speed, and qualification of the proposed method are comparable with those of the genetic algorithm and neural network methods previously applied to this problem.

  5. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    International Nuclear Information System (INIS)

    Angland, P.; Haberberger, D.; Ivancic, S. T.; Froula, D. H.

    2017-01-01

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
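
    The fit procedure (propose profile parameters, compare model to data via χ², anneal) can be miniaturized to a one-dimensional toy. The exponential profile, noise level, and cooling schedule below are stand-ins for the paper's eight-parameter profile and instrument model.

```python
import math, random

# Synthetic "measurement": exponential density profile with multiplicative noise
random.seed(2)
xs = [0.1 * i for i in range(50)]
true_n0, true_L = 1.0e20, 0.8                      # hypothetical profile parameters
data = [true_n0 * math.exp(-x / true_L) * random.gauss(1.0, 0.03) for x in xs]
sigma = [0.03 * d for d in data]                   # assumed 3% measurement error

def chi2(params):
    n0, L = params
    return sum(((n0 * math.exp(-x / L) - d) / s) ** 2
               for x, d, s in zip(xs, data, sigma))

p = [5.0e19, 1.5]                                  # deliberately poor initial guess
cur, t = chi2(p), 1000.0
for _ in range(20000):
    # Propose a small multiplicative perturbation of both parameters
    q = [p[0] * (1 + random.gauss(0, 0.02)), p[1] * (1 + random.gauss(0, 0.02))]
    c = chi2(q)
    if c < cur or random.random() < math.exp(-(c - cur) / t):
        p, cur = q, c
    t *= 0.9995                                    # cool down
print(p, round(cur, 1))                            # recovered parameters and final chi^2
```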

  6. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-01-01

    outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  7. Optimization of pulsed current GTAW process parameters for sintered hot forged AISI 4135 P/M steel welds by simulated annealing and genetic algorithm

    International Nuclear Information System (INIS)

    Joseph, Joby; Muthukumaran, S.

    2016-01-01

    Abundant improvements have occurred in materials handling, especially in metal joining. Pulsed current gas tungsten arc welding (PCGTAW) is one of the consequential fusion techniques. In this work, PCGTAW of AISI 4135 steel produced through powder metallurgy (P/M) has been executed, and the process parameters have been investigated applying Taguchi's L9 orthogonal array. The results show that the peak current (Ip), gas flow rate (GFR), welding speed (WS) and base current (Ib) are the critical parameters that strongly determine the tensile strength (TS) as well as the percentage of elongation (% Elong) of the joint. The practical impact of applying the Genetic algorithm (GA) and Simulated annealing (SA) to the PCGTAW process has been authenticated by calculating the deviation between predicted and experimental welding process parameters.

  8. Optimization of pulsed current GTAW process parameters for sintered hot forged AISI 4135 P/M steel welds by simulated annealing and genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Joby; Muthukumaran, S. [National Institute of Technology, Tamil Nadu (India)

    2016-01-15

    Abundant improvements have occurred in materials handling, especially in metal joining. Pulsed current gas tungsten arc welding (PCGTAW) is one of the consequential fusion techniques. In this work, PCGTAW of AISI 4135 steel produced through powder metallurgy (P/M) has been executed, and the process parameters have been investigated applying Taguchi's L9 orthogonal array. The results show that the peak current (Ip), gas flow rate (GFR), welding speed (WS) and base current (Ib) are the critical parameters that strongly determine the tensile strength (TS) as well as the percentage of elongation (% Elong) of the joint. The practical impact of applying the Genetic algorithm (GA) and Simulated annealing (SA) to the PCGTAW process has been authenticated by calculating the deviation between predicted and experimental welding process parameters.

  9. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
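
    One plausible way to hybridize GA with SA, shown purely for illustration (this is not HGSAA's published pseudocode), is to apply a Metropolis test when deciding whether an offspring replaces its parent:

```python
import math, random

def hybrid_ga_sa(fitness, n_genes, pop_size=40, gens=200, t0=1.0, alpha=0.98):
    """GA with an SA-style acceptance test: offspring that worsen fitness
    may still replace a parent with Boltzmann probability exp(-d/T)."""
    rand_ind = lambda: [random.random() for _ in range(n_genes)]
    pop = [rand_ind() for _ in range(pop_size)]
    t = t0
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]                  # one-point crossover
            i = random.randrange(n_genes)
            child[i] += random.gauss(0, 0.1)           # Gaussian mutation
            parent = min((a, b), key=fitness)          # better of the two parents
            d = fitness(child) - fitness(parent)
            if d < 0 or random.random() < math.exp(-d / t):
                nxt.append(child)                      # SA-style acceptance
            else:
                nxt.append(parent)
        pop, t = nxt, t * alpha                        # cool between generations
    return min(pop, key=fitness)

sphere = lambda x: sum(v * v for v in x)               # toy minimization target
print(sphere(hybrid_ga_sa(sphere, n_genes=5)))
```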

  10. 1-(2-furoyl)-3,3-(diphenyl)thiourea: spectroscopic characterization and structural study from X-ray powder diffraction using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Estevez H, O.; Duque, J. [Universidad de La Habana, Instituto de Ciencia y Tecnologia de Materiales, 10400 La Habana (Cuba); Rodriguez H, J. [UNAM, Instituto de Investigaciones en Materiales, 04510 Mexico D. F. (Mexico); Yee M, H., E-mail: oestevezh@yahoo.com [Instituto Politecnico Nacional, Escuela Superior de Fisica y Matematicas, 07738 Mexico D. F. (Mexico)

    2015-07-01

    1-Furoyl-3,3-diphenylthiourea (FDFT) was synthesized and characterized by FTIR, ¹H and ¹³C NMR, and ab initio X-ray powder structure analysis. FDFT crystallizes in the monoclinic space group P2₁ with a = 12.691(1) Å, b = 6.026(2) Å, c = 11.861(1) Å, β = 117.95(2)° and V = 801.5(3) Å³. The crystal structure has been determined from laboratory X-ray powder diffraction data using a direct-space global optimization strategy (simulated annealing) followed by Rietveld refinement. The thiourea group makes a dihedral angle of 73.8(6)° with the furoyl group. In the crystal structure, molecules are linked by van der Waals interactions, forming one-dimensional chains along the a axis. (Author)

  11. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  12. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  13. A Pseudo-Parallel Genetic Algorithm Integrating Simulated Annealing for Stochastic Location-Inventory-Routing Problem with Consideration of Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2015-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are three key issues to be settled in the design of logistics systems for e-commerce. Due to the online shopping features of e-commerce, customer returns are much more frequent than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with no-quality-defect returns. To solve the NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA in optimal solution quality, computing time, and computing stability.

  14. Application of simulated annealing in simulation and optimization of drying process of Zea mays malt

    Directory of Open Access Journals (Sweden)

    Marco A. C. Benvenga

    2011-10-01

    Full Text Available Kinetic simulation and drying process optimization of corn malt by Simulated Annealing (SA), for estimation of the temperature and time parameters that preserve maximum amylase activity in the obtained product, are presented here. Germinated corn seeds were dried at 54-76 °C in a convective dryer, with occasional measurement of moisture content and enzymatic activity. The experimental data obtained were submitted to modeling. Simulation and optimization of the drying process were performed using the SA method, a randomized improvement algorithm analogous to the physical annealing process. Results showed that the seeds were dry after 3 h to 5 h. Among the models used in this work, the kinetic model of water diffusion into corn seeds showed the best fit. Drying temperature and time showed a quadratic influence on the enzymatic activity. Optimization through SA found the best condition at 54 °C and between 5.6 h and 6.4 h of drying, yielding a specific activity in the corn malt of 5.26±0.06 SKB/mg at 15.69±0.10% remaining moisture.

  15. Learner-Adaptive Educational Technology for Simulation in Healthcare: Foundations and Opportunities.

    Science.gov (United States)

    Lineberry, Matthew; Dev, Parvati; Lane, H Chad; Talbot, Thomas B

    2018-06-01

    Despite evidence that learners vary greatly in their learning needs, practical constraints tend to favor ''one-size-fits-all'' educational approaches, in simulation-based education as elsewhere. Adaptive educational technologies - devices and/or software applications that capture and analyze relevant data about learners to select and present individually tailored learning stimuli - are a promising aid in learners' and educators' efforts to provide learning experiences that meet individual needs. In this article, we summarize and build upon the 2017 Society for Simulation in Healthcare Research Summit panel discussion on adaptive learning. First, we consider the role of adaptivity in learning broadly. We then outline the basic functions that adaptive learning technologies must implement and the unique affordances and challenges of technology-based approaches for those functions, sharing an illustrative example from healthcare simulation. Finally, we consider future directions for accelerating research, development, and deployment of effective adaptive educational technology and techniques in healthcare simulation.

  16. Mathematical foundation of quantum annealing

    International Nuclear Information System (INIS)

    Morita, Satoshi; Nishimori, Hidetoshi

    2008-01-01

    Quantum annealing is a generic name for quantum algorithms that use quantum-mechanical fluctuations to search for the solution of an optimization problem. It shares its basic idea with quantum adiabatic evolution, studied actively in quantum computation. The present paper reviews the mathematical and theoretical foundations of quantum annealing. In particular, theorems are presented for convergence conditions of quantum annealing to the target optimal state after an infinite-time evolution following the Schroedinger or stochastic (Monte Carlo) dynamics. It is proved that the same asymptotic behavior of the control parameter guarantees convergence for both the Schroedinger dynamics and the stochastic dynamics, in spite of the essential difference between these two types of dynamics. Also described are prescriptions to reduce errors in the final approximate solution obtained after a long but finite dynamical evolution of quantum annealing. It is shown there that errors can be reduced significantly by an ingenious choice of annealing schedule (the time dependence of the control parameter) without qualitatively compromising computational complexity. A review is given of the derivation of the convergence condition for classical simulated annealing from the viewpoint of quantum adiabaticity, using a classical-quantum mapping.
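
    The two families of schedules discussed, the slow logarithmic decrease that guarantees convergence and faster power-law alternatives, compare as follows (the constants are arbitrary):

```python
import math

def log_schedule(t, c=1.0):
    """Logarithmic schedule T(t) = c / log(2 + t): the classical
    sufficient condition for convergence of simulated annealing."""
    return c / math.log(2.0 + t)

def power_schedule(t, t0=1.0, gamma=0.5):
    """Power-law schedule T(t) = T0 / (1 + t)^gamma: much faster cooling,
    with convergence only under stronger conditions."""
    return t0 / (1.0 + t) ** gamma

for step in (0, 10, 100, 1000, 10000):
    print(step, round(log_schedule(step), 4), round(power_schedule(step), 4))
```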

  17. Electrode Materials, Thermal Annealing Sequences, and Lateral/Vertical Phase Separation of Polymer Solar Cells from Multiscale Molecular Simulations

    KAUST Repository

    Lee, Cheng-Kuang; Wodo, Olga; Ganapathysubramanian, Baskar; Pao, Chun-Wei

    2014-01-01

    … Simulations are performed for various configurations of electrode materials as well as processing temperature. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link…

  18. A comprehensive solution for simulating ultra-shallow junctions: From high dose/low energy implant to diffusion annealing

    International Nuclear Information System (INIS)

    Boucard, F.; Roger, F.; Chakarov, I.; Zhuk, V.; Temkin, M.; Montagner, X.; Guichard, E.; Mathiot, D.

    2005-01-01

    This paper presents a global approach permitting accurate simulation of ultra-shallow junction processing. Physically based models of dopant implantation (BCA) and diffusion (including coupling of point and extended defects) are integrated within a single simulation tool. A useful set of the relevant parameters has been obtained through an original calibration methodology. It is shown that this approach provides an efficient tool for process modelling.

  19. A comprehensive solution for simulating ultra-shallow junctions: From high dose/low energy implant to diffusion annealing

    Energy Technology Data Exchange (ETDEWEB)

    Boucard, F. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France)]. E-mail: Frederic.Boucard@silvaco.com; Roger, F. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Chakarov, I. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Zhuk, V. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Temkin, M. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Montagner, X. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Guichard, E. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Mathiot, D. [InESS, CNRS and Universite Louis Pasteur, 23 Rue du Loess, F67037 Strasbourg (France)]. E-mail: Daniel.Mathiot@iness.c-strasbourg.fr

    2005-12-05

    This paper presents a global approach permitting accurate simulation of ultra-shallow junction processing. Physically based models of dopant implantation (BCA) and diffusion (including coupling of point and extended defects) are integrated within a single simulation tool. A useful set of the relevant parameters has been obtained through an original calibration methodology. It is shown that this approach provides an efficient tool for process modelling.

  20. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  1. A computer simulation of an adaptive noise canceler with a single input

    Science.gov (United States)

    Albert, Stuart D.

    1991-06-01

    A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulation's assumptions and input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency-hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
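
    The Widrow-Hoff LMS update at the core of such a canceler is compact enough to sketch. This is a generic two-input LMS canceler for illustration; the report's single-input variant differs in how the reference is derived.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.002):
    """LMS adaptive noise canceler: adapt FIR weights so the filtered
    reference tracks the noise in the primary input; the error signal
    is the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1 : n + 1][::-1]   # newest sample first
        y = w @ x                                      # estimate of the noise
        e = primary[n] - y                             # error = signal estimate
        w += 2 * mu * e * x                            # Widrow-Hoff weight update
        out[n] = e
    return out

# Toy demo: a sine buried in filtered noise; a correlated copy of the raw
# noise serves as the reference input.
rng = np.random.default_rng(0)
t = np.arange(8000)
signal = np.sin(2 * np.pi * 0.01 * t)
noise = rng.normal(0.0, 1.0, t.size)
primary = signal + np.convolve(noise, [0.5, 0.3], mode="same")
cleaned = lms_cancel(primary, noise)
print(round(np.std(cleaned[4000:] - signal[4000:]), 3))  # residual after adaptation
```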

  2. Temperature Scaling Law for Quantum Annealing Optimizers.

    Science.gov (United States)

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite-temperature quantum annealers that prevents them from functioning as competitive scalable optimizers, and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that the temperature must drop at the very least in a logarithmic manner, but possibly also as a power law, with problem size. We corroborate our results with experiments and simulations and discuss the implications of these results for practical annealers.
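
    In hedged form, the derived law can be written as follows, where N is the problem size and c, c' and alpha are problem-dependent constants that the abstract does not specify:

      T(N) \;\lesssim\; \frac{c}{\log N}
      \qquad \text{or, in the power-law regime,} \qquad
      T(N) \;\sim\; c'\, N^{-\alpha}, \quad \alpha > 0.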

  3. a Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor Tls Networks

    Science.gov (United States)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it, based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experimental environment was simulated from a room located on the University of Calgary campus, where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.
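
    A minimal sketch of the simulated annealing variant only, assuming precomputed visibility and incidence-angle arrays and illustrative penalty weights (the paper's exact objective and constraints may differ):

      import numpy as np

      def sa_viewpoints(score, angles, n_iter=20000, t0=1.0, alpha=0.9995, rng=None):
          """SA over a binary viewpoint-selection vector.
          score[v, s]  : True if wall segment s is visible from viewpoint v.
          angles[v, s] : incidence angle of segment s seen from viewpoint v."""
          rng = rng or np.random.default_rng()
          n_view, n_seg = score.shape

          def cost(sel):
              covered = score[sel].any(axis=0)
              uncovered = n_seg - covered.sum()           # coverage constraint term
              ang = angles[sel][:, covered].sum() if sel.any() else 0.0
              return 1e3 * uncovered + sel.sum() + 1e-3 * ang

          sel = rng.random(n_view) < 0.5                  # random initial plan
          c, temp = cost(sel), t0
          best, best_c = sel.copy(), c
          for _ in range(n_iter):
              cand = sel.copy()
              cand[rng.integers(n_view)] ^= True          # flip one viewpoint
              dc = cost(cand) - c
              if dc < 0 or rng.random() < np.exp(-dc / temp):  # Metropolis rule
                  sel, c = cand, c + dc
                  if c < best_c:
                      best, best_c = sel.copy(), c
              temp *= alpha                               # geometric cooling
          return best, best_c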

  4. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    International Nuclear Information System (INIS)

    Kerner, Alexander M.

    2011-01-01

    This thesis describes the development of methods for the online adaptation of dynamic plant simulations, performed with a thermal-hydraulic system code, to measurement data. The approaches described are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used to identify the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment, the method of a "system-adapted heuristic adaptation with partial separation" (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator to a system code are identified.

  5. Computerized adaptive measurement of depression: A simulation study

    Directory of Open Access Journals (Sweden)

    Mammen Oommen

    2004-05-01

    Full Text Available Abstract Background Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results The adaptive BDI had an AUC of 88% as an indicator of a SCID diagnosis of MDE, equivalent to the full BDI. The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score, and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.
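
    The published adaptive BDI uses its own latent-trait model and stopping rule; the sketch below shows only the generic maximum-information mechanics of computerized adaptive testing under a two-parameter logistic (2PL) model, with hypothetical item parameters a and b:

      import numpy as np

      def two_pl(theta, a, b):
          """2PL response probability and Fisher information at severity theta."""
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return p, a**2 * p * (1.0 - p)

      def adaptive_test(answer, a, b, n_items=6):
          """answer(i) -> observed 0/1 response to item i; a, b are item
          discrimination/difficulty parameters (hypothetical here)."""
          theta, asked, total_info = 0.0, [], 1e-6
          for _ in range(n_items):
              _, info = two_pl(theta, a, b)
              for j in asked:
                  info[j] = -np.inf                   # never repeat an item
              i = int(np.argmax(info))                # most informative item next
              asked.append(i)
              r = answer(i)
              p_i, info_i = two_pl(theta, a[i], b[i])
              total_info += info_i
              theta += a[i] * (r - p_i) / total_info  # crude scoring update
          return theta, asked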

  6. Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors

    Science.gov (United States)

    Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.

    2017-12-01

    We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We will outline the current state of the art in adaptive sensing, provide analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to the varying observational environment. Adaptive sensors are of significance to resource-constrained, small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: if the physical act of sensing has a cost, how does the system determine whether the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data

  7. Adaptive Optics Simulation for the World's Largest Telescope on Multicore Architectures with Multiple GPUs

    KAUST Repository

    Ltaief, Hatem; Gratadour, Damien; Charara, Ali; Gendron, Eric

    2016-01-01

    We present a high performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used

  8. Non-linear modeling of 1H NMR metabonomic data using kernel-based orthogonal projections to latent structures optimized by simulated annealing

    International Nuclear Information System (INIS)

    Fonville, Judith M.; Bylesjoe, Max; Coen, Muireann; Nicholson, Jeremy K.; Holmes, Elaine; Lindon, John C.; Rantalainen, Mattias

    2011-01-01

    Highlights: → Non-linear modeling of metabonomic data using K-OPLS. → Automated optimization of the kernel parameter by simulated annealing. → K-OPLS provides improved prediction performance for exemplar spectral data sets. → Software implementation available for R and Matlab under the GPL v2 license. - Abstract: Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties, providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation afforded by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of

  9. Adaptive resolution simulation of supramolecular water : The concurrent making, breaking, and remaking of water bundles

    NARCIS (Netherlands)

    Zavadlav, Julija; Marrink, Siewert J; Praprotnik, Matej

    The adaptive resolution scheme (AdResS) is a multiscale molecular dynamics simulation approach that can concurrently couple atomistic (AT) and coarse-grained (CG) resolution regions, i.e., the molecules can freely adapt their resolution according to their current position in the system. Coupling to

  10. Developing adaptive user interfaces using a game-based simulation environment

    NARCIS (Netherlands)

    Brake, G.M. te; Greef, T.E. de; Lindenberg, J.; Rypkema, J.A.; Smets-Noor, N.J.J.M.

    2006-01-01

    In dynamic settings, user interfaces can provide more optimal support if they adapt to the context of use. Providing adaptive user interfaces to first responders may therefore be fruitful. A cognitive engineering method that incorporates development iterations in both a simulated and a real-world

  11. Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture

    Science.gov (United States)

    Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne

    2017-01-01

    This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to the learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…

  12. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. Simulations using the adaptive method ran at least 500 times faster than those using a conventional statistical process. Moreover, compared with the conventional statistical method, the adaptive method provided images more similar to the experimental ones, which proved that the adaptive method is highly effective for simulations requiring a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  13. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy, coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
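
    Nyx itself is a large adaptive-mesh code; the numpy fragment below sketches only the Cloud-in-Cell deposition step mentioned above, for a 1D periodic grid (grid size and units are illustrative):

      import numpy as np

      def cic_deposit(pos, mass, n_cells, dx):
          """1D Cloud-in-Cell deposition on a periodic grid: each particle's
          mass is shared linearly between its two nearest grid points."""
          rho = np.zeros(n_cells)
          xi = pos / dx                          # position in cell units
          i = np.floor(xi).astype(int)           # left grid point
          f = xi - i                             # fractional offset
          np.add.at(rho, i % n_cells, mass * (1.0 - f))
          np.add.at(rho, (i + 1) % n_cells, mass * f)
          return rho / dx                        # mass per unit length

    Interpolating the gravitational force back to the particles with the same linear weights is what keeps a particle-mesh scheme of this kind free of self-forces.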

  14. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
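
    A compact sketch of the KELM part and the simplex refinement stage, assuming an RBF kernel, a validation-MSE objective and a single starting point (in the paper, the CSA stage would supply several good starting points; all names here are illustrative):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.distance import cdist

      def kelm_fit(X, y, C, gamma):
          """Kernel ELM: closed-form regularized solution with an RBF kernel."""
          K = np.exp(-gamma * cdist(X, X, 'sqeuclidean'))
          return np.linalg.solve(K + np.eye(len(X)) / C, y)

      def kelm_predict(Xtr, alpha, Xte, gamma):
          return np.exp(-gamma * cdist(Xte, Xtr, 'sqeuclidean')) @ alpha

      def tune(Xtr, ytr, Xval, yval, x0=(0.0, 0.0)):
          """Nelder-Mead refinement of (log C, log gamma); a CSA stage would
          normally provide the promising starting points x0."""
          def val_err(p):
              C, gamma = np.exp(p)
              alpha = kelm_fit(Xtr, ytr, C, gamma)
              return np.mean((kelm_predict(Xtr, alpha, Xval, gamma) - yval) ** 2)
          return minimize(val_err, x0, method='Nelder-Mead').x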

  15. Spectral fitting for signal assignment and structural analysis of uniformly {sup 13}C-labeled solid proteins by simulated annealing based on chemical shifts and spin dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Matsuki, Yoh; Akutsu, Hideo; Fujiwara, Toshimichi [Osaka University, Institute for Protein Research (Japan)], E-mail: tfjwr@protein.osaka-u.ac.jp

    2007-08-15

    We describe an approach for signal assignment and structural analysis with a suite of two-dimensional {sup 13}C-{sup 13}C magic-angle-spinning solid-state NMR spectra of uniformly {sup 13}C-labeled peptides and proteins. We directly fit the calculated spectra to experimental ones by simulated annealing in the restrained molecular dynamics program CNS as a function of atomic coordinates. The spectra are calculated from the conformation-dependent chemical shifts obtained with SHIFTX and the cross-peak intensities computed for recoupled dipolar interactions. This method was applied to a membrane-bound 14-residue peptide, mastoparan-X. The obtained C', C{sup {alpha}} and C{sup {beta}} chemical shifts agreed with those reported previously to precisions of 0.2, 0.7 and 0.4 ppm, respectively. This spectral fitting program also provides backbone dihedral angles with a precision of about 50 deg. from the spectra, even with resonance overlaps. The restraints on the angles were improved by applying the protein database program TALOS to the obtained chemical shifts. The peptide structure provided by these restraints was consistent with the reported structure, with a backbone RMSD of about 1 Å.

  16. Improvement of bio-corrosion resistance for Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid by annealing within supercooled liquid region.

    Science.gov (United States)

    Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C

    2015-01-01

    The effects of the nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations from the Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, for different time periods to produce different degrees of α-Ti nano-phase formation in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion-resistant α-Ti phases exhibited more promising bio-corrosion resistance, due to their superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses, in which the reactive Zr2Cu phases induced serious galvanic corrosion and lower bio-corrosion resistance. Thus, whether a fully amorphous or a partially crystallized metallic glass exhibits better bio-corrosion resistance depends on the nature of the crystallized phase. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Global minimum-energy structure and spectroscopic properties of I2(*-) x n H2O clusters: a Monte Carlo simulated annealing study.

    Science.gov (United States)

    Pathak, Arup Kumar; Mukherjee, Tulsi; Maity, Dilip Kumar

    2010-01-18

    The vibrational (IR and Raman) and photoelectron spectral properties of hydrated iodine-dimer radical-anion clusters, I(2)(*-) x n H(2)O (n=1-10), are presented. Several initial guess structures are considered for each size of cluster to locate the global minimum-energy structure by applying a Monte Carlo simulated annealing procedure including spin-orbit interaction. In the Raman spectrum, hydration reduces the intensity of the I-I stretching band but enhances the intensity of the O-H stretching band of water. Raman spectra of more highly hydrated clusters appear to be simpler than the corresponding IR spectra. Vibrational bands due to simultaneous stretching vibrations of O-H bonds in a cyclic water network are observed for I(2)(*-) x n H(2)O clusters with n > or = 3. The vertical detachment energy (VDE) profile shows stepwise saturation that indicates closing of the geometrical shell in the hydrated clusters on addition of every four water molecules. The calculated VDE of finite-size small hydrated clusters is extrapolated to evaluate the bulk VDE value of I(2)(*-) in aqueous solution as 7.6 eV at the CCSD(T) level of theory. Structure and spectroscopic properties of these hydrated clusters are compared with those of hydrated clusters of Cl(2)(*-) and Br(2)(*-).

  18. Inverse planning anatomy-based dose optimization for HDR-brachytherapy of the prostate using fast simulated annealing algorithm and dedicated objective function

    International Nuclear Information System (INIS)

    Lessard, Etienne; Pouliot, Jean

    2001-01-01

    An anatomy-based dose optimization algorithm is developed to automatically and rapidly produce highly conformal dose coverage of the target volume while minimizing urethra, bladder, and rectal doses in the delivery of a high-dose-rate (HDR) brachytherapy boost for the treatment of prostate cancer. The dwell times are optimized using an inverse planning simulated annealing algorithm (IPSA) governed entirely by the anatomy extracted from a CT and by a dedicated objective function (cost function) reflecting the clinical prescription and constraints. With this inverse planning approach, the focus is on the physician's prescription and constraints instead of on technical limitations. Consequently, the physician's control over the treatment is improved. The capacity of this algorithm to represent the physician's prescription is presented for a clinical prostate case. The computation time (CPU) for IPSA optimization is less than 1 min (41 s for 142 915 iterations) for a typical clinical case, allowing fast and practical dose optimization. The achievement of highly conformal dose coverage of the target volume opens the possibility of delivering a higher dose to the prostate without inducing overdosage of the urethra and the normal tissues surrounding the prostate. Moreover, using the same concept, it will be possible to deliver a boost dose to a delimited tumor volume within the prostate. Finally, this method can be easily extended to other anatomical sites.
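
    The clinical IPSA objective and its constraints are richer than can be shown here; the sketch below, with hypothetical dose-rate matrices D_target and D_oar, a prescription rx and an organ limit, illustrates only the idea of annealing over dwell times against a penalty-based cost:

      import numpy as np

      def dose_cost(t, D_target, D_oar, rx, limit, w_under=50.0, w_over=10.0):
          """Penalize target voxels below the prescription rx and OAR voxels
          above the limit; the dose is linear in the dwell times t."""
          under = np.maximum(rx - D_target @ t, 0.0)
          over = np.maximum(D_oar @ t - limit, 0.0)
          return w_under * under.sum() + w_over * over.sum()

      def anneal_dwell_times(cost, n_dwell, n_iter=50000, t0=10.0,
                             alpha=0.9999, rng=None):
          rng = rng or np.random.default_rng()
          t = np.full(n_dwell, 1.0)              # initial dwell times (s)
          c, temp = cost(t), t0
          for _ in range(n_iter):
              cand = t.copy()
              j = rng.integers(n_dwell)
              cand[j] = max(cand[j] + rng.normal(0.0, 0.5), 0.0)  # times >= 0
              dc = cost(cand) - c
              if dc < 0 or rng.random() < np.exp(-dc / temp):
                  t, c = cand, c + dc
              temp *= alpha
          return t, c

      # usage (all numbers hypothetical):
      # best_t, _ = anneal_dwell_times(
      #     lambda t: dose_cost(t, D_target, D_oar, rx=10.0, limit=7.5), n_dwell=48)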

  19. Rapid thermal pulse annealing

    International Nuclear Information System (INIS)

    Miller, M.G.; Koehn, B.W.; Chaplin, R.L.

    1976-01-01

    Characteristics of recovery processes have been investigated for cases of heating a sample to successively higher temperatures by means of isochronal annealing or by using rapid pulse annealing. The recovery spectrum shows the same features regardless of which annealing procedure is used. In order to determine which technique provides the best resolution, a study was made of how two independent first-order processes are separated for different heating rates and time increments of the annealing pulses. It is shown that the pulse-anneal method offers definite advantages over isochronal annealing when annealing for short time increments. Experimental data obtained by means of the pulse-anneal technique are given for the various substages of stage I of aluminium. (author)

  20. High-temperature annealing of graphite: A molecular dynamics study

    Science.gov (United States)

    Petersen, Andrew; Gillette, Victor

    2018-05-01

    A modified AIREBO potential was developed to simulate the effects of thermal annealing on the structure and physical properties of damaged graphite. AIREBO parameter modifications were made to reproduce Density Functional Theory interstitial results. These changes to the potential resulted in high-temperature annealing of the model, as measured by stored-energy reduction. These results show some resemblance to experimental high-temperature annealing results, and show promise that annealing effects in graphite are accessible with molecular dynamics and reactive potentials.

  1. Complex adaptive systems and computational simulation in Archaeology

    Directory of Open Access Journals (Sweden)

    Salvador Pardo-Gordó

    2017-07-01

    Full Text Available Traditionally the concept of ‘complexity’ is used as a synonym for ‘complex society’, i.e., human groups with characteristics such as urbanism, inequalities, and hierarchy. The introduction of Nonlinear Systems and Complex Adaptive Systems to the discipline of archaeology has nuanced this concept. This theoretical turn has led to the rise of modelling as a method of analysis of historical processes. This work has a twofold objective: to present the theoretical current characterized by generative thinking in archaeology and to present a concrete application of agent-based modelling to an archaeological problem: the dispersal of the first ceramic production in the western Mediterranean.

  2. Simulation of Spacecraft Damage Tolerance and Adaptive Controls

    Science.gov (United States)

    2013-06-01

    kpre = 9.9639/24/3600/180*pi*0;        % nodal precession constant assumed zero here
    wn = kpre*(Re/(Re+h))^3.5*cos(incln);  % nodal precession (zero eccentricity)
    % V ... for 0 H spin up
    w_wheel = 2800*(2*pi/60);              % Wheel speed in RPM converted to rad/s
    Iwheel = 0.0614*1.3558179483314;       % Wheel inertia in slug-ft^2 converted (exact) to kg m^2
    h_wheel = Iwheel*w_wheel;              % CMG Wheel Angular Momentum
    % [Fossen]'s adaptive feedforward parameters
    ETA = -100; LAMBDA = 0.5

  3. Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation

    NARCIS (Netherlands)

    Van Esch, J.M.

    2010-01-01

    Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses

  4. Intelligent Electric Power Systems with Active-Adaptive Electric Networks: Challenges for Simulation Tools

    Directory of Open Access Journals (Sweden)

    Ufa Ruslan A.

    2015-01-01

    Full Text Available The motivation for the presented research is the need to develop new methods and tools for adequate simulation of intelligent electric power systems with active-adaptive electric networks (IES), including Flexible Alternating Current Transmission System (FACTS) devices. The key requirements for the simulation were formulated. The presented analysis of IES simulation results confirms the need to use a hybrid modelling approach.

  5. Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator

    Science.gov (United States)

    Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.

    2017-05-01

    A review of research on adaptive individual and team-based training showed that, both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes production training, training in general computer and office equipment skills, and simulator training, including virtual simulators that use computers to reproduce real-world manufacturing situations; as a rule, the evaluation of AASTM operators' knowledge is determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to training and retraining of AASTM operators provides only technical training and tests knowledge solely by assessing operators' actions in a simulated environment.

  6. Simulation of Fuzzy Adaptive PI Controlled Grid Interactive Inverter

    Directory of Open Access Journals (Sweden)

    Necmi ALTIN

    2009-03-01

    Full Text Available In this study, a voltage-source grid-interactive inverter is modeled and simulated in MATLAB/Simulink. The inverter is current controlled, and a fuzzy-PI current controller is used to generate the switching pattern that shapes the inverter output current. The grid-interactive inverter consists of a line-frequency transformer and an LC-type filter. Galvanic isolation between the grid and the renewable energy source is provided by the line-frequency transformer, and the LC filter is employed to filter the high-frequency harmonic components in the current waveform due to PWM switching and to reduce the output-current THD. Results of the MATLAB/Simulink simulation show that the inverter output current is sinusoidal and in phase with the line voltage, and that the current harmonics are within the limits of international standards

  7. Adaptation to a simulated central scotoma during visual search training.

    Science.gov (United States)

    Walsh, David V; Liu, Lei

    2014-03-01

    Patients with a central scotoma usually use a preferred retinal locus (PRL) consistently in daily activities. The selection process and time course of the PRL development are not well understood. We used a gaze-contingent display to simulate an isotropic central scotoma in normal subjects while they were practicing a difficult visual search task. As compared to foveal search, initial exposure to the simulated scotoma resulted in prolonged search reaction time, many more fixations and unorganized eye movements during search. By the end of a 1782-trial training with the simulated scotoma, the search performance improved to within 25% of normal foveal search. Accompanying the performance improvement, there were also fewer fixations, fewer repeated fixations in the same area of the search stimulus and a clear tendency of using one area near the border of the scotoma to identify the search target. The results were discussed in relation to natural development of PRL in central scotoma patients and potential visual training protocols to facilitate PRL development. Published by Elsevier Ltd.

  8. Simulated climate adaptation in storm-water systems: Evaluating the efficiency of within-system flexibility

    Directory of Open Access Journals (Sweden)

    Adam D. McCurdy

    Full Text Available Changes in regional temperature and precipitation patterns resulting from global climate change may adversely affect the performance of long-lived infrastructure. Adaptation may be necessary to ensure that infrastructure offers consistent service and remains cost effective. But long service times and the deep uncertainty associated with future climate projections make adaptation decisions especially challenging for managers. Incorporating flexibility into systems can increase their effectiveness across different climate futures but can also add significant costs. In this paper we review existing work on flexibility in climate change adaptation of infrastructure, such as robust decision-making and dynamic adaptive pathways, apply a basic typology of flexibility, and test alternative strategies for flexibility in distributed infrastructure systems composed of multiple emplacements of a common, long-lived element: roadway culverts. Rather than treating a system of dispersed infrastructure elements as monolithic, we simulate “options flexibility”, in which inherent differences between individual elements are incorporated into adaptation decisions. We use a virtual testbed of highway drainage crossing structures to examine the performance, under different climate scenarios, of policies that allow for multiple adaptation strategies with varying timing based on individual emplacement characteristics. Results indicate that a strategy with options flexibility informed by crossing characteristics offers a more efficient method of adaptation than do monolithic policies. In some cases this results in more cost-effective adaptation for agencies building long-lived, climate-sensitive infrastructure, even where detailed system data and analytical capacity are limited. Keywords: Climate adaptation, Stormwater management, Adaptation pathways

  9. A Model for Capturing Team Adaptation in Simulated Emergencies

    DEFF Research Database (Denmark)

    Paltved, Charlotte; Musaeus, Peter

    2013-01-01

    and conceptualizes team processes through recursive cycles of updates. In the 29 simulation scenarios, 94 updates were recorded. There were between 0 and 8 updates per scenario (mean 3.2). Level five was achieved in 13 scenarios, level four in 8 scenarios and, finally, levels two and three were achieved in four...... is required to meaningfully account for communication exchanges in context. As such, this theoretical framework might provide a vocabulary for operationalizing the differences between "effective and ineffective" communication. Moving beyond counting communication events or the frequency of certain......

  10. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling

  11. Efficacy of very fast simulated annealing global optimization method for interpretation of self-potential anomaly by different forward formulation over 2D inclined sheet type structure

    Science.gov (United States)

    Biswas, A.; Sharma, S. P.

    2012-12-01

    Self-potential (SP) is an important geophysical technique that measures the electrical potential due to natural sources of current in the Earth's subsurface. An inclined sheet is a very familiar model structure associated with mineralization, fault planes, groundwater flow and many other geological features that exhibit self-potential anomalies. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures for various purposes. The mathematical expression for the forward response over a two-dimensional dipping sheet can be written in three different ways, each using five variables, and the complexity of the inversion differs between the three forward approaches. In the present study, an interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed, which yielded new insight into the uncertainty and equivalence of model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, its dip and its quality. A comparative assessment of the three forward approaches is performed to evaluate the efficacy of each approach in resolving the possible ambiguity. Even though each forward formulation yields the same forward response, optimizing different sets of variables using different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and it is observed that one of the three methods is best suited for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the
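
    For reference, the standard VFSA move and cooling schedule (Ingber's form) can be sketched as below; the five sheet parameters would form the model vector m, while the SP forward response itself is not reproduced here (T0, c and the bounds are illustrative):

      import numpy as np

      def vfsa_step(m, T, lo, hi, rng):
          """VFSA move: Cauchy-like perturbation whose width shrinks with T."""
          u = rng.random(m.size)
          y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
          return np.clip(m + y * (hi - lo), lo, hi)   # stay inside the search bounds

      def vfsa_temperature(T0, k, c=1.0, n_par=5):
          """Standard VFSA schedule: T_k = T0 * exp(-c * k**(1/n_par))."""
          return T0 * np.exp(-c * k ** (1.0 / n_par))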

  12. Optimal design of a water distribution system (WDS) applying the Simulated Annealing (SA) algorithm

    Directory of Open Access Journals (Sweden)

    Maikel Méndez-Morales

    2014-09-01

    Full Text Available This article presents the application of the Simulated Annealing (SA) algorithm to the optimal design of a water distribution system (WDS). SA is a metaheuristic search algorithm based on an analogy between the annealing process in metals (the controlled cooling of a body) and the solution of combinatorial optimization problems. The SA algorithm, together with various mathematical models, has been used successfully in the optimal design of WDSs. The full-scale WDS of the community of Marsella, in San Carlos, Costa Rica, was used as a case study. The SA algorithm was implemented in the well-known EPANET model through the WaterNetGen extension. Three different automated variants of the SA algorithm were compared with the manual trial-and-error design of the Marsella WDS, using only unit pipe costs. The results show that the three automated SA schemes yielded costs below 0.49, as a fraction of the original cost of the trial-and-error design. This demonstrates that the SA algorithm is capable of optimizing combinatorial problems tied to the minimum-cost design of full-scale water distribution systems.

  13. An adaptive finite element method for turbulent flow simulations

    International Nuclear Information System (INIS)

    Arnoux-Guisse, F.; Bonnin, O.; Leal de Sousa, L.; Nicolas, G.

    1995-05-01

    After outlining the space and time discretization methods used in the N3S thermal-hydraulics code developed at EDF/NHL, we describe the capabilities of the peripheral version, the Adaptive Mesh, which comprises two separate parts: the error indicator computation and an element-subdivision module usable by the solid dynamics code ASTER and the electromagnetism code TRIFOU, also developed by R&DD. The error indicators implemented in N3S are described. They consist of a projection indicator quantifying the space error in laminar or turbulent flow calculations and a Navier-Stokes residual indicator calculated on each element. The method for subdividing triangles into four sub-triangles and tetrahedra into eight sub-tetrahedra is then presented with its advantages and drawbacks. It is illustrated by examples showing the efficiency of the module. The last example concerns the 2D case of flow behind a backward-facing step. (authors). 9 refs., 5 figs., 1 tab

  14. Cluster Optimization and Parallelization of Simulations with Dynamically Adaptive Grids

    KAUST Repository

    Schreiber, Martin; Weinzierl, Tobias; Bungartz, Hans-Joachim

    2013-01-01

    The present paper studies solvers for partial differential equations that work on dynamically adaptive grids stemming from spacetrees. Due to the underlying tree formalism, such grids can be decomposed efficiently on-the-fly into connected grid regions (clusters). A graph on those clusters, classified according to their grid invariance, workload, multi-core affinity, and further metadata, represents the inter-cluster communication. While stationary clusters can already be handled more efficiently than their dynamic counterparts, we propose to treat them as atomic grid entities and introduce a skip mechanism that allows the grid traversal to omit those regions completely. The communication graph ensures that the cluster data nevertheless are kept consistent, and several shared-memory parallelization strategies are feasible. A hyperbolic benchmark that has to remesh selected mesh regions iteratively to preserve conforming tessellations acts as the benchmark for the present work. We discuss runtime improvements resulting from the skip mechanism and the implications for shared-memory performance and load balancing. © 2013 Springer-Verlag.

  15. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
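
    The SSA component named above is Gillespie's direct method; a self-contained numpy sketch of that ingredient only (reactions without diffusion or tau-leaping, with a made-up dimerization example in the comments):

      import numpy as np

      def ssa(x0, stoich, rates, propensity, t_end, rng=None):
          """Gillespie direct method: stoich[j] is the state change of
          reaction j; propensity(x, rates) returns the propensity vector."""
          rng = rng or np.random.default_rng()
          t, x = 0.0, np.array(x0, dtype=float)
          while t < t_end:
              a = propensity(x, rates)
              a0 = a.sum()
              if a0 <= 0.0:
                  break                                  # no reaction can fire
              t += rng.exponential(1.0 / a0)             # time to the next reaction
              x += stoich[rng.choice(len(a), p=a / a0)]  # fire one reaction
          return t, x

      # Example: dimerization 2A -> B with rate constant k
      # stoich = np.array([[-2.0, 1.0]])
      # prop = lambda x, k: np.array([k[0] * x[0] * (x[0] - 1) / 2.0])
      # ssa([100, 0], stoich, [0.01], prop, t_end=10.0)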

  16. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as the propellant feedline system, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires a short time step initially, followed by a much larger time step; there are also instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
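
    A minimal sketch of such a feedback controller for the step size; `step` and `change_measure` stand in for the solver call and the monitored key variables, and the exponent and safety factors are illustrative:

      def adapt_dt(dt, err, tol, order=1, fac=0.9, fmin=0.2, fmax=5.0):
          """Shrink dt when the monitored change exceeds tol, grow it when small."""
          s = fac * (tol / max(err, 1e-14)) ** (1.0 / (order + 1))
          return dt * min(max(s, fmin), fmax)

      # inside a transient loop (step and change_measure are hypothetical):
      # x_new = step(x, t, dt)              # advance the network flow solver
      # err = change_measure(x, x_new)      # e.g. max relative change in key variables
      # if err <= tol:
      #     t, x = t + dt, x_new            # accept the step
      # dt = adapt_dt(dt, err, tol)         # feedback-controlled resize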

  17. Numerical simulation of supersonic over/under expanded jets using adaptive grid

    International Nuclear Information System (INIS)

    Talebi, S.; Shirani, E.

    2001-05-01

    Supersonic under- and over-expanded jets were simulated numerically. In order to achieve the solution efficiently and with high resolution, an adaptive grid is used. The axisymmetric, compressible, time-dependent Navier-Stokes equations in body-fitted curvilinear coordinates were solved numerically. The equations were discretized using a control-volume formulation and the Van Leer flux-splitting approach, and were solved implicitly. The resulting computer code was used to simulate four different cases of moderately and strongly under- and over-expanded jet flows. The results show that, with grid adaptation, the various features of this complicated flow can be observed. It was shown that the adaptation method is very efficient and is able to produce fine grids near high-gradient regions. (author)

  18. Adaptive complementary fuzzy self-recurrent wavelet neural network controller for the electric load simulator system

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2016-03-01

    Full Text Available Due to the complexities of the electric load simulator, this article develops a high-performance nonlinear adaptive controller to improve the torque tracking performance of the electric load simulator, which mainly consists of an adaptive fuzzy self-recurrent wavelet neural network controller with variable structure (VSFSWC) and a complementary controller. The VSFSWC is clear and easy to use for real-time systems and greatly improves the convergence rate and control precision. The complementary controller is designed to eliminate the effect of the approximation error between the proposed neural network controller and the ideal feedback controller without chattering phenomena. Moreover, adaptive learning laws are derived to guarantee the system stability in the sense of the Lyapunov theory. Finally, hardware-in-the-loop simulations are carried out to verify the feasibility and effectiveness of the proposed algorithms in different working styles.

  19. Modernizing quantum annealing using local searches

    International Nuclear Information System (INIS)

    Chancellor, Nicholas

    2017-01-01

    I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques. (paper)

  20. Using statistical sensitivities for adaptation of a best-estimate thermo-hydraulic simulation model

    International Nuclear Information System (INIS)

    Liu, X.J.; Kerner, A.; Schaefer, A.

    2010-01-01

    On-line adaptation of best-estimate simulations of NPP behaviour to time-dependent measurement data can be used to ensure that simulations performed in parallel to plant operation develop synchronously with the real plant behaviour, even over extended periods of time. This opens up a range of applications, including operator support in non-standard situations and improved diagnostics and validation of measurements in real plants or experimental facilities. A number of adaptation methods have been proposed and successfully applied to control problems. However, these methods are difficult to apply to best-estimate thermal-hydraulic codes, such as TRACE and ATHLET, with their large nonlinear differential equation systems and sophisticated time integration techniques. This paper presents techniques that use statistical sensitivity measures to overcome those problems by reducing the number of parameters subject to adaptation. It describes how to identify the most significant parameters for adaptation and how this information can be used by combining:
    - decomposition techniques splitting the system into a small set of component parts with clearly defined interfaces, where boundary conditions can be derived from the measurement data,
    - filtering techniques to ensure that the time frame for adaptation is meaningful,
    - numerical sensitivities to find minimal error conditions.
    The suitability of combining those techniques is shown by application to an adaptive simulation of the PKL experiment.
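
    One simple statistical sensitivity measure of the kind referred to above is a standardized regression coefficient estimated from sampled code runs; a sketch, with `model(p)` standing in for a system-code run reduced to one scalar output (sample count and spread are illustrative):

      import numpy as np

      def rank_parameters(model, p0, n_samples=200, rel_spread=0.05, rng=None):
          """Sample parameters around p0, regress the scalar output on the
          standardized inputs, and rank by |standardized coefficient|."""
          rng = rng or np.random.default_rng()
          P = p0 * (1.0 + rel_spread * rng.standard_normal((n_samples, p0.size)))
          y = np.array([model(p) for p in P])
          A = (P - P.mean(axis=0)) / P.std(axis=0)   # standardized inputs
          coef, *_ = np.linalg.lstsq(A, y - y.mean(), rcond=None)
          return np.argsort(-np.abs(coef))           # most influential first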

  1. Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media

    KAUST Repository

    Chueh, C.C.

    2010-10-01

    An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.

  2. An adaptive DES model that allows wall-resolved eddy simulation

    International Nuclear Information System (INIS)

    Yin, Zifei; Durbin, Paul A.

    2016-01-01

    Highlights: • A Detached Eddy Simulation model that mimics the dynamic Smagorinsky formulation. • Adaptivity of the model allows wall-resolved eddy simulation on sufficient grids. • The ability to simulate natural and bypass transition is tested. - Abstract: A modification to the Adaptive-DES method of Yin et al. (2015) is proposed to improve its near-wall behavior. The modification is to the function (C_lim) that imposes a lower limit on the dynamically evaluated coefficient (C_DES). The modification allows Adaptive-DES to converge to wall-resolved eddy simulation when the grid resolution supports it. On coarse grids, or at high Reynolds number, it reverts to shielded DES, that is, to DDES. The new formulation predicts results closer to wall-resolved LES than the previous formulation. It provides an ability to simulate transition: it is tested in both orderly and bypass transition. In fully turbulent, attached flow, the modification has little effect. Any improvement in predictions stems from the better near-wall behavior of the adaptive method.

  3. Dynamically adaptive Lattice Boltzmann simulation of shallow water flows with the Peano framework

    KAUST Repository

    Neumann, Philipp

    2015-09-01

    © 2014 Elsevier Inc. All rights reserved. We present a dynamically adaptive Lattice Boltzmann (LB) implementation for solving the shallow water equations (SWEs). Our implementation extends an existing LB component of the Peano framework. We revise the modular design with respect to the incorporation of new simulation aspects and LB models. The basic SWE-LB implementation is validated in different breaking dam scenarios. We further provide a numerical study on stability of the MRT collision operator used in our simulations.

  4. Improving the adaptability of simulated evolutionary swarm robots in dynamically changing environments.

    Directory of Open Access Journals (Sweden)

    Yao Yao

    Full Text Available One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN. An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to a better adaptive behaviour. Furthermore, in contrast with most hitherto developed ANN-based systems that need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store 'good behaviour' and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is reencountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment.

  5. Improving the Adaptability of Simulated Evolutionary Swarm Robots in Dynamically Changing Environments

    Science.gov (United States)

    Yao, Yao; Marchal, Kathleen; Van de Peer, Yves

    2014-01-01

    One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to a better adaptive behaviour. Furthermore, in contrast with most hitherto developed ANN-based systems that need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store ‘good behaviour’ and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is reencountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment. PMID:24599485

  6. Simulating streamer discharges in 3D with the parallel adaptive Afivo framework

    NARCIS (Netherlands)

    H.J. Teunissen (Jannis); U. M. Ebert (Ute)

    2017-01-01

    We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges, based on the Afivo framework that features adaptive mesh refinement, geometric multigrid methods for Poisson's equation, and OpenMP parallelism. We describe the numerical

  7. Largenet2: an object-oriented programming library for simulating large adaptive networks.

    Science.gov (United States)

    Zschaler, Gerd; Gross, Thilo

    2013-01-15

    The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://biond.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. gerd@biond.org

  8. Simulation Research on Adaptive Control of a Six-degree-of-freedom Material-testing Machine

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2014-02-01

    Full Text Available This paper presents an adaptive controller equipped with a stiffness estimation method for a novel material-testing machine, in order to alleviate the performance degradation caused by the stiffness variance of the tested specimen. The dynamic model of the proposed machine is built using the Kane method, and the kinematic model is established with a closed-form solution. The stiffness estimation method is developed based on the recursive least-squares method and the proposed stiffness equivalent matrix. The control performance of the adaptive controller is simulated in detail. The simulation results illustrate that the proposed controller can greatly improve the control performance of the target material-testing machine through online stiffness estimation and adaptive parameter tuning, especially in low-cycle fatigue (LCF) and high-cycle fatigue (HCF) tests.
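
    The recursive least-squares core of such an online stiffness estimator can be sketched as follows; the scalar force = stiffness × displacement model, the forgetting factor and all gains are illustrative assumptions, since the paper works with a full stiffness equivalent matrix:

      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.99):
          """One recursive least-squares step: theta is the current stiffness
          estimate, P the covariance, phi the regressor (displacement terms),
          y the measured force, lam a forgetting factor (< 1 discounts old data)."""
          K = P @ phi / (lam + phi @ P @ phi)      # gain vector
          theta = theta + K * (y - phi @ theta)    # correct by prediction error
          P = (P - np.outer(K, phi @ P)) / lam     # covariance update
          return theta, P

      # Toy usage: recover a scalar stiffness k from force = k * displacement.
      theta, P, true_k = np.array([0.0]), np.eye(1) * 1e3, 50.0
      for _ in range(100):
          x = np.random.uniform(0.0, 1.0, 1)                  # displacement sample
          f = (true_k * x + np.random.normal(0, 0.1)).item()  # noisy force reading
          theta, P = rls_update(theta, P, x, f)
      print(theta)                                            # approaches [50.]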

  9. Comments on "Adaptive resolution simulation in equilibrium and beyond" by H. Wang and A. Agarwal

    Science.gov (United States)

    Klein, R.

    2015-09-01

    Wang and Agarwal (Eur. Phys. J. Special Topics, this issue, 2015, doi: 10.1140/epjst/e2015-02411-2) discuss variants of Adaptive Resolution Molecular Dynamics Simulations (AdResS), and their applications. Here we comment on their report, addressing scaling properties of the method, artificial forcings implemented to ensure constant density across the full simulation despite changing thermodynamic properties of the simulated media, the possible relation between an AdResS system on the one hand and a phase transition phenomenon on the other, and peculiarities of the SPC/E water model.

  10. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    Science.gov (United States)

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  11. Adaptive constructive processes and memory accuracy: consequences of counterfactual simulations in young and older adults.

    Science.gov (United States)

    Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L

    2014-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.

  12. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    International Nuclear Information System (INIS)

    Bargalló, Enric; Sureda, Pere Joan; Arroyo, Jose Manuel; Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way that is useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.

  13. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way that is useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.

  14. Control of suspended low-gravity simulation system based on self-adaptive fuzzy PID

    Science.gov (United States)

    Chen, Zhigang; Qu, Jiangang

    2017-09-01

    In this paper, an active suspended low-gravity simulation system is proposed to follow the vertical motion of a spacecraft. First, the working principle and mathematical model of the low-gravity simulation system are presented. In order to establish the balance process and suppress the strong position disturbances acting on the system, a self-adaptive fuzzy PID control strategy is proposed. It combines a PID controller with a fuzzy control strategy, so that the control system automatically adjusts the proportional, integral and differential parameters of the controller in real time. Finally, we use Simulink to verify the performance of the controller. The results show that with the self-adaptive fuzzy PID method the system reaches the balanced state quickly, without overshoot or oscillation, while following a speed of 3 m/s, and the simulation accuracy of the system reaches 95.9% or more.
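
    A heavily simplified sketch of the gain-scheduling idea follows; the membership breakpoints, rule table and plant model are invented for illustration and are far cruder than the fuzzy rule base the paper describes:

      def fuzzy_pid_step(e, e_prev, integ, dt, base=(2.0, 0.5, 0.1)):
          """One step of a PID controller whose gains are retuned by crude
          fuzzy-style rules on the error magnitude."""
          kp, ki, kd = base
          if abs(e) > 1.0:              # 'large error': act faster, damp integral
              kp, ki = kp * 1.5, ki * 0.5
          elif abs(e) < 0.1:            # 'small error': let the integral finish
              ki = ki * 1.5
          integ += e * dt
          u = kp * e + ki * integ + kd * (e - e_prev) / dt
          return u, integ

      # Toy first-order plant x' = -x + u tracking a unit setpoint.
      x = integ = e_prev = 0.0
      dt = 0.01
      for _ in range(1000):
          e = 1.0 - x
          u, integ = fuzzy_pid_step(e, e_prev, integ, dt)
          x += dt * (-x + u)
          e_prev = e
      print(round(x, 3))                # settles near 1.0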

  15. Adaptive grids and numerical fluid simulations for scrape-off layer plasmas

    International Nuclear Information System (INIS)

    Klingshirn, Hans-Joachim

    2010-01-01

    Magnetic confinement nuclear fusion experiments create plasmas with local temperatures in excess of 100 million Kelvin. In these experiments the scrape-off layer, which is the plasma region in direct contact with the device wall, is of central importance both for the quality of the energy confinement and the wall material lifetime. To study the behaviour of the scrape-off layer, in addition to experiments, numerical simulations are used. This work investigates the use of adaptive discretizations of space and compatible numerical methods for scrape-off layer simulations. The resulting algorithms allow dynamic adaptation of computational grids aligned to the magnetic fields to precisely capture the strongly anisotropic energy and particle transport in the plasma. The methods are applied to the multi-fluid plasma code B2, with the goal of reducing the runtime of simulations and extending the applicability of the code.

  16. Modeling and simulating the adaptive electrical properties of stochastic polymeric 3D networks

    International Nuclear Information System (INIS)

    Sigala, R; Smerieri, A; Camorani, P; Schüz, A; Erokhin, V

    2013-01-01

    Memristors are passive two-terminal circuit elements that combine resistance and memory. Although in theory memristors are a very promising approach to fabricate hardware with adaptive properties, there are only very few implementations able to show their basic properties. We recently developed stochastic polymeric matrices with a functionality that evidences the formation of self-assembled three-dimensional (3D) networks of memristors. We demonstrated that those networks show the typical hysteretic behavior observed in the ‘one input-one output’ memristive configuration. Interestingly, using different protocols to electrically stimulate the networks, we also observed that their adaptive properties are similar to those present in the nervous system. Here, we model and simulate the electrical properties of these self-assembled polymeric networks of memristors, the topology of which is defined stochastically. First, we show that the model recreates the hysteretic behavior observed in the real experiments. Second, we demonstrate that the networks modeled indeed have a 3D instead of a planar functionality. Finally, we show that the adaptive properties of the networks depend on their connectivity pattern. Our model was able to replicate fundamental qualitative behavior of the real organic 3D memristor networks; yet, through the simulations, we also explored other interesting properties, such as the relation between connectivity patterns and adaptive properties. Our model and simulations represent an interesting tool to understand the very complex behavior of self-assembled memristor networks, which can finally help to predict and formulate hypotheses for future experiments. (paper)

  17. Design of a Mobile Agent-Based Adaptive Communication Middleware for Federations of Critical Infrastructure Simulations

    Science.gov (United States)

    Görbil, Gökçe; Gelenbe, Erol

    The simulation of critical infrastructures (CI) can involve the use of diverse domain-specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communication is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.

  18. A New Simulation Technique for Study of Collisionless Shocks: Self-Adaptive Simulations

    International Nuclear Information System (INIS)

    Karimabadi, H.; Omelchenko, Y.; Driscoll, J.; Krauss-Varban, D.; Fujimoto, R.; Perumalla, K.

    2005-01-01

    The traditional technique for simulating physical systems modeled by partial differential equations is by means of time-stepping methodology where the state of the system is updated at regular discrete time intervals. This method has inherent inefficiencies. In contrast to this methodology, we have developed a new asynchronous type of simulation based on a discrete-event-driven (as opposed to time-driven) approach, where the simulation state is updated on a 'need-to-be-done-only' basis. Here we report on this new technique, show an example of particle acceleration in a fast magnetosonic shockwave, and briefly discuss additional issues that we are addressing concerning algorithm development and parallel execution

  19. Adaptation.

    Science.gov (United States)

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  20. Scale-adaptive simulation of a hot jet in cross flow

    Energy Technology Data Exchange (ETDEWEB)

    Duda, B M; Esteve, M-J [AIRBUS Operations S.A.S., Toulouse (France); Menter, F R; Hansen, T, E-mail: benjamin.duda@airbus.com [ANSYS Germany GmbH, Otterfing (Germany)

    2011-12-22

    The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall-bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in the well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.

  1. Scale-adaptive simulation of a hot jet in cross flow

    International Nuclear Information System (INIS)

    Duda, B M; Esteve, M-J; Menter, F R; Hansen, T

    2011-01-01

    The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry as it directly impacts aircraft safety and global performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers for realistic aircraft applications do not permit the use of pure Large Eddy Simulations as the spatial and temporal resolution requirements for wall-bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in the well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.

  2. Image-guided adaptive gating of lung cancer radiotherapy: a computer simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Aristophanous, Michalis; Rottmann, Joerg; Park, Sang-June; Berbeco, Ross I [Department of Radiation Oncology, Brigham and Women' s Hospital, Dana Farber Cancer Institute and Harvard Medical School, Boston, MA (United States); Nishioka, Seiko [Department of Radiology, NTT Hospital, Sapporo (Japan); Shirato, Hiroki, E-mail: maristophanous@lroc.harvard.ed [Department of Radiation Medicine, Hokkaido University School of Medicine, Sapporo (Japan)

    2010-08-07

    The purpose of this study is to investigate the effect that image-guided adaptation of the gating window during treatment could have on the residual tumor motion, by simulating different gated radiotherapy techniques. There are three separate components of this simulation: (1) the 'Hokkaido Data', which are previously measured 3D data of lung tumor motion tracks and the corresponding 1D respiratory signals obtained during the entire ungated radiotherapy treatments of eight patients, (2) the respiratory gating protocol at our institution and the imaging performed under that protocol and (3) the actual simulation in which the Hokkaido Data are used to select tumor position information that could have been collected based on the imaging performed under our gating protocol. We simulated treatments with a fixed gating window and a gating window that is updated during treatment. The patient data were divided into different fractions, each with continuous acquisitions longer than 2 min. In accordance with the imaging performed under our gating protocol, we assumed that tumor position information was available for the first 15 s of treatment, obtained from kV fluoroscopy, and that for the rest of each fraction the tumor position was only available during beam-on time from MV imaging. The gating window was set according to the information obtained from the first 15 s such that the residual motion was less than 3 mm. For the fixed gating window technique the gate remained the same for the entire treatment, while for the adaptive technique the range of the tumor motion during beam-on time was measured and used to adapt the gating window to keep the residual motion below 3 mm. The algorithm used to adapt the gating window is described. The residual tumor motion inside the gating window was reduced on average by 24% for the patients with regular breathing patterns and the difference was statistically significant (p-value = 0.01). The magnitude of the residual tumor motion
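
    A minimal sketch of the adaptation step, under the assumption that the gate is simply re-centred on recently observed beam-on positions whenever the baseline drifts; the paper's actual algorithm is more involved:

      import numpy as np

      def update_gate(positions_mm, width_mm=3.0):
          """Centre a gating window of fixed width (the residual-motion
          budget) on the median of the observed tumour positions."""
          centre = np.median(positions_mm)
          return centre - width_mm / 2.0, centre + width_mm / 2.0

      # Toy breathing trace: 4 s period sinusoid plus a slow baseline drift.
      t = np.linspace(0.0, 200.0, 2000)                 # 10 Hz sampling
      trace = 5.0 * np.sin(2.0 * np.pi * t / 4.0) + 0.02 * t
      gate = update_gate(trace[:150])                   # first 15 s: kV fluoroscopy
      for start in range(150, 2000, 400):
          beam_on = trace[start:start + 100]            # positions from MV imaging
          if not (gate[0] <= np.median(beam_on) <= gate[1]):
              gate = update_gate(beam_on)               # drifted: adapt the window
      print(gate)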

  3. PULSim: User-Based Adaptable Simulation Tool for Railway Planning and Operations

    Directory of Open Access Journals (Sweden)

    Yong Cui

    2018-01-01

    Full Text Available Simulation methods are widely used in the field of railway planning and operations. Currently, several commercial software tools are available that not only provide functionality for railway simulation but also enable further evaluation and optimisation of the network for scheduling, dispatching, and capacity research. However, the various tools are all lacking with respect to the standards they utilise as well as their published interfaces. For an end-user, the basic mechanisms and assumptions built into a simulation tool are unknown, which means that the true potential of these software tools is limited. One of the most critical issues is that users cannot define a sophisticated workflow, integrated over several rounds of simulation with adjustable parameters and settings. This paper develops and describes a user-based, customisable platform. As preconditions of the platform, the design aspects for modelling the components of a railway system and for building the workflow of railway simulation are elaborated in detail. Based on the model and the workflow, an integrated simulation platform with open interfaces is developed. Users and researchers gain the ability to rapidly develop their own algorithms, supported by the tailored simulation process in a flexible manner. The productivity of using simulation tools for further evaluation and optimisation will be significantly improved through the user-adaptable open interfaces.

  4. Adaptation

    International Development Research Centre (IDRC) Digital Library (Canada)

    building skills, knowledge or networks on adaptation, ... the African partners leading the AfricaAdapt network, together with the UK-based Institute of Development Studies; and ... UNCCD Secretariat, Regional Coordination Unit for Africa, Tunis, Tunisia.

  5. An adaptive transmission protocol for managing dynamic shared states in collaborative surgical simulation.

    Science.gov (United States)

    Qin, J; Choi, K S; Ho, Simon S M; Heng, P A

    2008-01-01

    A force prediction algorithm is proposed to facilitate virtual-reality (VR) based collaborative surgical simulation by reducing the effect of network latencies. State regeneration is used to correct the estimated prediction. This algorithm is incorporated into an adaptive transmission protocol in which auxiliary features such as view synchronization and coupling control are included to ensure system consistency. We implemented this protocol using a multi-threaded technique on a cluster-based network architecture.
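
    One common way to realize such force prediction is linear dead reckoning with state regeneration; the sketch below is a generic version of that idea, not the specific algorithm of the paper:

      class ForcePredictor:
          """Extrapolate a remote force signal between network packets and
          snap back to the authoritative value when a packet arrives."""

          def __init__(self):
              self.f = 0.0       # last authoritative force
              self.df = 0.0      # estimated rate of change

          def on_packet(self, f_remote, dt_since_last):
              if dt_since_last > 0.0:
                  self.df = (f_remote - self.f) / dt_since_last  # refresh slope
              self.f = f_remote                                  # regenerate state

          def predict(self, dt_ahead):
              return self.f + self.df * dt_ahead

      p = ForcePredictor()
      p.on_packet(1.0, 0.05)        # packet arrives after 50 ms of latency
      print(p.predict(0.02))        # estimated force 20 ms ahead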

  6. Integration of adaptive optics into high energy laser modeling and simulation

    Science.gov (United States)

    2017-06-01

    contain hundreds of actuators with high control bandwidths and low hysteresis, all of which are ideal parameters for accurate reconstruction of higher... Naval Postgraduate School, Monterey, California. Thesis: Integration of Adaptive Optics into High Energy Laser Modeling and Simulation, by Donald Puent.

  7. 3D design and electric simulation of a silicon drift detector using a spiral biasing adapter

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yu-yun; Xiong, Bo [School of Materials Science and Engineering, Xiangtan University, Xiangtan 411105 (China); Center for Semiconductor Particle and photon Imaging Detector, Development and Fabrication, Xiangtan University, Xiangtan 411105 (China); Li, Zheng, E-mail: zhengli58@gmail.com [School of Materials Science and Engineering, Xiangtan University, Xiangtan 411105 (China); Center for Semiconductor Particle and photon Imaging Detector, Development and Fabrication, Xiangtan University, Xiangtan 411105 (China)

    2016-09-21

    The detector system combining a spiral biasing adapter (SBA) with a silicon drift detector (SBA-SDD) differs substantially from the traditional silicon drift detector (SDD), including the spiral SDD. It has a spiral biasing adapter of the same design as a traditional spiral SDD and an SDD with concentric rings having the same radius. Compared with the traditional spiral SDD, the SBA-SDD separates the spiral's functions of biasing adapter and p–n junction definition. In this paper, the SBA-SDD is simulated using the Sentaurus TCAD tool, a full 3D device simulation tool. The simulated electric characteristics include electric potential, electric field, electron concentration, and single event effect. Because of the special design of the SBA-SDD, the SBA can generate an optimum drift electric field in the SDD, comparable with the conventional spiral SDD, while the SDD can be designed with concentric rings to reduce surface area. Also, the current and heat generated in the SBA are separated from the SDD. To study the single event response, we simulated the induced current caused by incident heavy ions (20 and 50 μm penetration length) with different linear energy transfer (LET). The SBA-SDD can be used just like a conventional SDD, such as an X-ray detector for energy spectroscopy and imaging. - Highlights: • The separation of the spiral biasing adapter and SDD is a new concept. • The distribution of the electric potential is symmetrical around the axis through the anode. • The region with higher electron concentrations defines the drift channel.

  8. Infrared thermal annealing device

    International Nuclear Information System (INIS)

    Gladys, M.J.; Clarke, I.; O'Connor, D.J.

    2003-01-01

    A device for annealing samples within an ultrahigh vacuum (UHV) scanning tunneling microscopy system was designed, constructed, and tested. The device is based on illuminating the sample with infrared radiation from outside the UHV chamber with a tungsten projector bulb. The apparatus uses an elliptical mirror to focus the beam through a sapphire viewport for low absorption. Experiments were conducted on clean Pd(100) and annealing temperatures in excess of 1000 K were easily reached

  9. Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp goal-oriented adaptivity and the solver work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
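
    The refinement loop driving such an algorithm can be caricatured as follows; the toy error indicator and the smoothness flag stand in for the dual-weighted goal-error estimate and regularity detection of the real method:

      from dataclasses import dataclass

      @dataclass
      class Element:
          h: float              # element size
          p: int                # polynomial order
          smooth: bool = True   # crude local-regularity indicator

          def split(self):      # h-refinement: break into smaller elements
              return [Element(self.h / 2.0, self.p, self.smooth) for _ in range(2)]

      def goal_error(e):        # toy estimate: decays with size and order
          return e.h ** (e.p + 1)

      def hp_adapt(elements, tol):
          """Refine the worst contributor (p where smooth, h elsewhere)
          until the summed goal-error estimate drops below tol."""
          while sum(map(goal_error, elements)) > tol:
              worst = max(elements, key=goal_error)
              if worst.smooth:
                  worst.p += 1
              else:
                  elements.remove(worst)
                  elements += worst.split()
          return elements

      mesh = hp_adapt([Element(0.5, 1), Element(0.5, 1, smooth=False)], tol=1e-3)
      print(len(mesh), max(e.p for e in mesh))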

  10. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrich-Levy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
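
    In code, the essence of event-driven integration is a priority queue of per-cell update times in place of a global time loop; the quantum df and the rates below are invented for illustration:

      import heapq

      def run_des(cells, rate, t_end, df=0.01):
          """Each cell schedules its own next update at dt = df / rate(cell),
          so fast regions are visited often and quiescent ones rarely,
          with no global CFL-limited step."""
          queue = [(df / rate(c), c) for c in cells]   # (next event time, cell)
          heapq.heapify(queue)
          state = {c: 0.0 for c in cells}
          while queue:
              t, c = heapq.heappop(queue)
              if t > t_end:
                  break                                # all later events ignored
              state[c] += df                           # apply the quantum change
              heapq.heappush(queue, (t + df / rate(c), c))
          return state

      # Two cells with very different local time scales share one event queue.
      print(run_des(["fast", "slow"],
                    rate=lambda c: 100.0 if c == "fast" else 1.0, t_end=1.0))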

  11. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  12. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
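
    The adaptive classification step can be sketched in a few lines; the propensity threshold is an illustrative stand-in for the paper's level-of-activity heuristic:

      def split_channels(propensities, threshold=10.0):
          """Send channels expected to fire many times per step to the
          tau-leap (fast) set and the rest to an exact SSA (slow) set."""
          fast, slow = [], []
          for j, a in enumerate(propensities):
              (fast if a >= threshold else slow).append(j)
          return fast, slow

      # e.g. propensities a_j(x) evaluated at the current state x:
      print(split_channels([250.0, 3.2, 0.4, 78.0]))   # -> ([0, 3], [1, 2])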

  13. Online body schema adaptation based on internal mental simulation and multisensory feedback

    Directory of Open Access Journals (Sweden)

    Pedro eVicente

    2016-03-01

    Full Text Available In this paper, we describe a novel approach to obtain automatic adaptation of the robot body schema and to improve the robot perceptual and motor skills based on this body knowledge. Predictions obtained through a mental simulation of the body are combined with the real sensory feedback to achieve two objectives simultaneously: body schema adaptation and markerless 6D hand pose estimation. The body schema consists of a computer graphics simulation of the robot, which includes the arm and head kinematics (adapted online during the movements) and an appearance model of the hand shape and texture. The mental simulation process generates predictions on how the hand will appear in the robot camera images, based on the body schema and the proprioceptive information (i.e. motor encoders). These predictions are compared to the actual images using Sequential Monte Carlo techniques to feed a particle-based Bayesian estimation method to estimate the parameters of the body schema. The updated body schema will improve the estimates of the 6D hand pose, which is then used in a closed-loop control scheme (i.e. visual servoing), enabling precise reaching. We report experiments with the iCub humanoid robot that support the validity of our approach. A number of simulations with precise ground-truth were performed to evaluate the estimation capabilities of the proposed framework. Then, we show how the use of high-performance GPU programming and an edge-based algorithm for visual perception allow for real-time implementation in real world scenarios.
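
    The particle-based estimation step can be sketched as follows; the Gaussian likelihood, jitter and the scalar offset parameter are illustrative assumptions in place of the full image-discrepancy model:

      import numpy as np

      def smc_update(particles, weights, observe_error, sigma=0.05, jitter=0.01):
          """Reweight body-schema particles by how well their predicted hand
          appearance matches the camera image, then resample and perturb."""
          errs = np.array([observe_error(p) for p in particles])
          w = weights * np.exp(-0.5 * errs / sigma ** 2)
          w /= w.sum()
          idx = np.random.choice(len(particles), size=len(particles), p=w)
          resampled = particles[idx] + np.random.normal(0.0, jitter, particles.shape)
          return resampled, np.full(len(particles), 1.0 / len(particles))

      # Toy run: recover a single joint-offset parameter (true value 0.1 rad).
      particles = np.random.uniform(-0.5, 0.5, (200, 1))
      weights = np.full(200, 1.0 / 200)
      for _ in range(30):
          particles, weights = smc_update(particles, weights,
                                          lambda p: (p[0] - 0.1) ** 2)
      print(float(particles.mean()))    # close to 0.1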

  14. Adapt

    Science.gov (United States)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  15. Nonlinear adaptive robust back stepping force control of hydraulic load simulator: Theory and experiments

    International Nuclear Information System (INIS)

    Yao, Jianyong; Jiao, Zongxia; Yao, Bin

    2014-01-01

    High performance robust force control of a hydraulic load simulator with constant but unknown hydraulic parameters is considered. In contrast to linear control based on hydraulic linearization equations, the hydraulic system's inherent nonlinear properties and uncertainties prevent conventional feedback proportional-integral-derivative (PID) control from meeting high performance requirements. Furthermore, the hydraulic system may be subjected to non-smooth and discontinuous nonlinearities due to the directional change of valve opening. In this paper, based on a nonlinear system model of the hydraulic load simulator, a discontinuous projection-based nonlinear adaptive robust backstepping controller is developed that accounts for servo valve dynamics. The proposed design constructs a novel stable adaptive controller and adaptation laws with additional pressure-dynamics-related unknown parameters, which can compensate for the system nonlinearities and uncertain parameters; meanwhile, a well-designed robust controller is synthesized to dominate the model uncertainties coming from both parametric uncertainties and uncertain nonlinearities, including unmodeled and ignored system dynamics. The controller theoretically guarantees a prescribed transient performance and final tracking accuracy in the presence of both parametric uncertainties and uncertain nonlinearities, while achieving asymptotic output tracking in the absence of unstructured uncertainties. Implementation issues are also discussed with a view to controller simplification. Comparative results are obtained to verify the high-performance nature of the proposed controller.

  16. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    Science.gov (United States)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system, and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method to SPC/E water, where we find that the average water structure is conserved. We then use our method to sample dialanine and the folding of TrpCage, where we find good agreement with simulation data reported in the literature. Finally, we apply our methodologies to the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, and we speculate that conformational entropy in particular plays a major role in fibril formation as a rate-limiting factor.

  17. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    Science.gov (United States)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

    Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two assumptions: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous flux-conservative update of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.

  18. Human spaceflight and space adaptations: Computational simulation of gravitational unloading on the spine

    Science.gov (United States)

    Townsend, Molly T.; Sarigul-Klijn, Nesrin

    2018-04-01

    Living in reduced gravitational environments for a prolonged duration, such as a fly-by mission to Mars or an extended stay at the International Space Station, affects the human body - in particular, the spine. As the spine adapts to spaceflight, morphological and physiological changes compromise the mechanical integrity of the spinal column, potentially endangering internal organs, nervous health, and human body mechanical function. Therefore, a high fidelity computational model and simulation of the whole human spine was created and validated for the purpose of investigating the mechanical integrity of the spine in crew members during exploratory space missions. A spaceflight-exposed spine has been developed through the adaptation of a three-dimensional nonlinear finite element model, with the updated Lagrangian formulation, of a healthy ground-based human spine in vivo. Simulation of the porohyperelastic response of the intervertebral disc to mechanical unloading resulted in a model capable of accurately predicting spinal swelling/lengthening, spinal motion, and internal stress distribution. The curvature of this space-adapted spine model was compared to a control terrestrial-based finite element model, indicating how the shape changed. Finally, potential injury sites for crew members are predicted for a typical 9-day mission.

  19. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
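
    The differential-evolution proposal at the core of DREAM-style samplers is compact enough to show directly; the randomized-subspace selection and multi-pair jumps of the full algorithm are omitted here for brevity:

      import numpy as np

      def de_proposal(chains, i, eps=1e-6):
          """Jump chain i along the difference of two other randomly chosen
          chains; gamma = 2.38 / sqrt(2 d) is the standard DE-MC scaling."""
          n, d = chains.shape
          gamma = 2.38 / np.sqrt(2.0 * d)
          r1, r2 = np.random.choice([j for j in range(n) if j != i],
                                    size=2, replace=False)
          return (chains[i] + gamma * (chains[r1] - chains[r2])
                  + np.random.normal(0.0, eps, d))

      chains = np.random.randn(8, 3)    # 8 parallel chains, 3 parameters
      print(de_proposal(chains, 0))     # candidate, then Metropolis accept/reject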

  20. Nonlinear adaptive robust back stepping force control of hydraulic load simulator: Theory and experiments

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jianyong [Nanjing University of Science and Technology, Nanjing (China); Jiao, Zongxia [Beihang University, Beijing (China); Yao, Bin [Purdue University, West Lafayette (United States)

    2014-04-15

    High performance robust force control of a hydraulic load simulator with constant but unknown hydraulic parameters is considered. In contrast to linear control based on hydraulic linearization equations, the hydraulic system's inherent nonlinear properties and uncertainties prevent conventional feedback proportional-integral-derivative (PID) control from meeting high performance requirements. Furthermore, the hydraulic system may be subjected to non-smooth and discontinuous nonlinearities due to the directional change of valve opening. In this paper, based on a nonlinear system model of the hydraulic load simulator, a discontinuous projection-based nonlinear adaptive robust backstepping controller is developed that accounts for servo valve dynamics. The proposed design constructs a novel stable adaptive controller and adaptation laws with additional pressure-dynamics-related unknown parameters, which can compensate for the system nonlinearities and uncertain parameters; meanwhile, a well-designed robust controller is synthesized to dominate the model uncertainties coming from both parametric uncertainties and uncertain nonlinearities, including unmodeled and ignored system dynamics. The controller theoretically guarantees a prescribed transient performance and final tracking accuracy in the presence of both parametric uncertainties and uncertain nonlinearities, while achieving asymptotic output tracking in the absence of unstructured uncertainties. Implementation issues are also discussed with a view to controller simplification. Comparative results are obtained to verify the high-performance nature of the proposed controller.
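
    The projection-type adaptation law that such controllers rely on has the generic shape sketched below; the regressor, error, gains and bounds are placeholders rather than the paper's actual hydraulic quantities:

      import numpy as np

      def adapt_step(theta, bounds, phi, e, gamma, dt):
          """Gradient-type parameter update theta_dot = gamma * phi * e,
          followed by projection (here a simple clip) onto the known
          physical range so the estimates stay bounded."""
          theta = theta + dt * gamma * phi * e
          return np.clip(theta, bounds[0], bounds[1])

      theta = np.array([0.5])           # initial hydraulic-gain estimate
      for _ in range(200):
          phi, e = 1.0, 0.3             # regressor and tracking error (toy)
          theta = adapt_step(theta, (0.0, 2.0), phi, e, gamma=5.0, dt=0.01)
      print(theta)                      # projection holds it at the bound 2.0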

  1. Selective adaptation in networks of heterogeneous populations: model, simulation, and experiment.

    Directory of Open Access Journals (Sweden)

    Avner Wallach

    2008-02-01

    Full Text Available Biological systems often change their responsiveness when subject to persistent stimulation, a phenomenon termed adaptation. In neural systems, this process is often selective, allowing the system to adapt to one stimulus while preserving its sensitivity to another. In some studies, it has been shown that adaptation to a frequent stimulus increases the system's sensitivity to rare stimuli. These phenomena were explained in previous work as a result of complex interactions between the various subpopulations of the network. A formal description and analysis of neuronal systems, however, is hindered by the network's heterogeneity and by the multitude of processes taking place at different time-scales. Viewing neural networks as populations of interacting elements, we develop a framework that facilitates a formal analysis of complex, structured, heterogeneous networks. The formulation developed is based on an analysis of the availability of activity dependent resources, and their effects on network responsiveness. This approach offers a simple mechanistic explanation for selective adaptation, and leads to several predictions that were corroborated in both computer simulations and in cultures of cortical neurons developing in vitro. The framework is sufficiently general to apply to different biological systems, and was demonstrated in two different cases.

  2. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Full Text Available Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse-engineer the network structure, while local transfer entropy can be used to analyze the network structure's dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
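
    For binary time series the transfer entropy T(Y -> X) reduces to plug-in counting; real CASN data would need binning and bias correction, so treat this as the bare estimator only:

      import numpy as np
      from collections import Counter

      def transfer_entropy(x, y):
          """Estimate T(Y -> X) = sum p(x1, x0, y0) log2 [p(x1 | x0, y0) /
          p(x1 | x0)] from two binary series by counting triples."""
          triples = Counter(zip(x[1:], x[:-1], y[:-1]))
          pairs_xy = Counter(zip(x[:-1], y[:-1]))
          pairs_xx = Counter(zip(x[1:], x[:-1]))
          singles = Counter(x[:-1])
          n = len(x) - 1
          te = 0.0
          for (x1, x0, y0), c in triples.items():
              p_cond_xy = c / pairs_xy[(x0, y0)]
              p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
              te += (c / n) * np.log2(p_cond_xy / p_cond_x)
          return te

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, 5000)
      x = np.roll(y, 1)                 # x copies y with one step of lag
      print(transfer_entropy(x, y))     # about 1 bit: y drives x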

  3. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    Full Text Available A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and constant calibration are presented, and the typical circular cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach, with its scale-adaptive concept, is more efficient than the original SAS method in obtaining a convergent resolution, while remaining comparable with DES in visually capturing the fine-scale unsteadiness. Furthermore, the grid sensitivity issue of DES is encouragingly remedied thanks to the locally adjusted limiter. The ISAS simulation represents the development of the shear layers and the flow profiles of the recirculation region well, and thus the statistical quantities of interest, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.
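
    For orientation, the scale-adaptive ingredient that both SAS and ISAS build on is the von Karman length scale, which compares the first and second velocity derivatives; the abstract does not reproduce the modified dissipation operator itself, so the following is the standard SAS definition rather than the new ISAS term:

      L_{vK} = \kappa \left| \frac{U'}{U''} \right|, \qquad
      U' = \sqrt{2\, S_{ij} S_{ij}}, \qquad
      U'' = \sqrt{\frac{\partial^2 U_i}{\partial x_j^2}\,
                  \frac{\partial^2 U_i}{\partial x_k^2}}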

  4. Adaptive learning in agents behaviour: A framework for electricity markets simulation

    DEFF Research Database (Denmark)

    Pinto, Tiago; Vale, Zita; Sousa, Tiago M.

    2014-01-01

    decision support to MASCEM's negotiating agents so that they can properly achieve their goals. ALBidS uses artificial intelligence methodologies and data analysis algorithms to provide effective adaptive learning capabilities to such negotiating entities. The main contribution is provided by a methodology...... that combines several distinct strategies to build action proposals, so that the best can be chosen at each time, depending on the context and simulation circumstances. The choosing process includes reinforcement learning algorithms, a mechanism for negotiating-context analysis, a mechanism for the management...... allows integrating different strategic approaches for electricity market negotiations, and choosing the most appropriate one at each time, for each different negotiation context. This methodology is integrated in ALBidS (Adaptive Learning strategic Bidding System) – a multiagent system that provides

  5. Direct numerical simulations of particle-laden density currents with adaptive, discontinuous finite elements

    Directory of Open Access Journals (Sweden)

    S. D. Parkinson

    2014-09-01

    Full Text Available High-resolution direct numerical simulations (DNSs are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier–Stokes equations. Two-dimensional simulations are known to produce unrealistic cohesive vortices that are not representative of the real three-dimensional physics. The effect of this phenomena is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions but this is computationally expensive. This paper presents a novel finite-element (FE DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 106 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring model performance in capturing the range of dynamics on a range of meshes. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. The use of adaptive mesh optimisation is shown to reduce the required element count by approximately two orders of magnitude in comparison with fixed, uniform mesh simulations. This leads to a substantial reduction in computational cost. The computational savings and flexibility afforded by adaptivity along with the flexibility of FE methods make this model well suited to simulating turbidity currents in complex domains.

  6. Molecular Dynamics Simulations with Quantum Mechanics/Molecular Mechanics and Adaptive Neural Networks.

    Science.gov (United States)

    Shen, Lin; Yang, Weitao

    2018-03-13

    Direct molecular dynamics (MD) simulation with ab initio quantum mechanical and molecular mechanical (QM/MM) methods is very powerful for studying the mechanism of chemical reactions in a complex environment but also very time-consuming. The computational cost of QM/MM calculations during MD simulations can be reduced significantly using semiempirical QM/MM methods with lower accuracy. To achieve higher accuracy at the ab initio QM/MM level, a correction on the existing semiempirical QM/MM model is an attractive idea. Recently, we reported a neural network (NN) method, QM/MM-NN, to predict the potential energy difference between semiempirical and ab initio QM/MM approaches. The high-level results can be obtained using a neural network based on semiempirical QM/MM MD simulations, but the lack of direct MD sampling at the ab initio QM/MM level is still a deficiency that limits the applications of QM/MM-NN. In the present paper, we developed a dynamic scheme of QM/MM-NN for direct MD simulations on the NN-predicted potential energy surface to approximate ab initio QM/MM MD. Since some configurations excluded from the database for NN training were encountered during simulations, which may cause difficulties in MD sampling, an adaptive procedure inspired by the selection scheme reported by Behler [Behler, Int. J. Quantum Chem. 2015, 115, 1032; Behler, Angew. Chem., Int. Ed. 2017, 56, 12828] was employed, with some adaptations, to update the NN and carry out MD iteratively. We further applied the adaptive QM/MM-NN MD method to the free energy calculation and transition path optimization on chemical reactions in water. The results at the ab initio QM/MM level can be well reproduced using this method after 2-4 iteration cycles. The saving in computational cost is about 2 orders of magnitude. It demonstrates that QM/MM-NN with direct MD simulations has great potential not only for the calculation of thermodynamic properties but also for the characterization of
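
A stripped-down picture of this adaptive delta-learning loop is sketched below (Python/NumPy). Kernel ridge regression stands in for the paper's neural network, and `e_expensive`, `e_cheap`, and `run_md` are toy placeholders for the ab initio QM/MM energy, the semiempirical QM/MM energy, and the MD driver:

```python
import numpy as np

def e_expensive(x):   # placeholder for the ab initio QM/MM energy
    return np.sin(x).sum() + 0.1 * (x ** 2).sum()

def e_cheap(x):       # placeholder for the semiempirical QM/MM energy
    return np.sin(x).sum()

def fit_correction(X, gamma=0.5, lam=1e-8):
    """Kernel ridge regression on the cheap-to-expensive energy gap."""
    d = np.array([e_expensive(x) - e_cheap(x) for x in X])
    K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), d)
    return lambda x: np.exp(-gamma * ((x - X) ** 2).sum(-1)) @ alpha

def adaptive_qmmm_nn(X, run_md, cycles=4, novelty=0.5):
    """Train correction, run MD on the corrected surface, add unseen configs."""
    for _ in range(cycles):
        corr = fit_correction(X)
        traj = run_md(lambda x: e_cheap(x) + corr(x))     # MD on corrected PES
        dist = ((traj[:, None] - X[None, :]) ** 2).sum(-1).min(1)
        X = np.vstack([X, traj[dist > novelty]])          # Behler-style selection
    return fit_correction(X)
```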

  7. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    International Nuclear Information System (INIS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-01-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations
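
The core of the AdResS coupling is a smooth resolution weight that equals 1 in the atomistic zone, 0 in the coarse-grained reservoir, and ramps across the hybrid layer; pairwise forces are interpolated accordingly. A minimal sketch (Python; 1-D distance from the atomistic-region centre, parameter names illustrative):

```python
import numpy as np

def resolution_weight(r, r_atomistic, d_hybrid):
    """AdResS-style weight: 1 in the atomistic region, cos^2 ramp across the
    hybrid layer of width d_hybrid, 0 in the coarse-grained reservoir."""
    if r <= r_atomistic:
        return 1.0
    if r >= r_atomistic + d_hybrid:
        return 0.0
    return np.cos(np.pi * (r - r_atomistic) / (2.0 * d_hybrid)) ** 2

def coupled_pair_force(f_atomistic, f_cg, w_i, w_j):
    """Force interpolation F_ij = w_i w_j F^AT + (1 - w_i w_j) F^CG."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_cg
```

Because the weight changes on the fly with particle position, a solvent molecule crossing the hybrid layer gradually gains or loses its atomistic degrees of freedom.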

  8. Cyberwar XXI: quantifying the unquantifiable: adaptive AI for next-generation conflict simulations

    Science.gov (United States)

    Miranda, Joseph; von Kleinsmid, Peter; Zalewski, Tony

    2004-08-01

    The era of the "Revolution in Military Affairs," "4th Generation Warfare" and "Asymmetric War" requires novel approaches to modeling warfare at the operational and strategic levels of modern conflict. For example: "What if, in response to our planned actions, the adversary reacts in such-and-such a manner? What will our response be? What are the possible unintended consequences?" Next-generation conflict simulation tools are required to help create and test novel courses of action (COAs) in support of real-world operations. Conflict simulations allow non-lethal and cost-effective exploration of the "what-if" of COA development. The challenge has been to develop an automated decision-support software tool which allows competing COAs to be compared in simulated dynamic environments. Principal Investigator Joseph Miranda's research is based on modeling an integrated political, military, economic, social, infrastructure and information (PMESII) environment. The main effort was to develop an adaptive AI engine which models agents operating within an operational-strategic conflict environment. This was implemented in Cyberwar XXI - a simulation which models COA selection in a PMESII environment. Within this framework, agents simulate decision-making processes and provide predictive capability regarding the potential behavior of Command Entities. The 2003 Iraq scenario is the first ready for V&V testing.

  9. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    Energy Technology Data Exchange (ETDEWEB)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  10. A novel double-convection chaotic attractor, its adaptive control and circuit simulation

    Science.gov (United States)

    Mamat, M.; Vaidyanathan, S.; Sambas, A.; Mujiarto; Sanjaya, W. S. M.; Subiyanto

    2018-03-01

    A 3-D novel double-convection chaotic system with three nonlinearities is proposed in this research work. The dynamical properties of the new chaotic system are described in terms of phase portraits, Lyapunov exponents, Kaplan-Yorke dimension, dissipativity, stability analysis of equilibria, etc. Adaptive control and synchronization of the new chaotic system with unknown parameters are achieved via nonlinear controllers and the results are established using Lyapunov stability theory. Furthermore, an electronic circuit realization of the new 3-D novel chaotic system is presented in detail. Finally, the circuit experimental results of the 3-D novel chaotic attractor show agreement with the numerical simulations.

  11. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    International Nuclear Information System (INIS)

    Samtaney, S.; Jardin, S.C.; Colella, P.; Martin, D.F.

    2003-01-01

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized
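
The projection idea can be illustrated with a spectral stand-in (Python/NumPy, periodic box): solve a Poisson problem for the potential of div B and subtract its gradient, which is the constraint a MAC projection enforces on its staggered grid (the paper's discrete, grid-based solve is different in detail):

```python
import numpy as np

def project_divergence_free(B, dx=1.0):
    """Remove the divergence of a periodic field B with shape (3, n, n, n):
    B <- B - grad(inv_laplace(div B))."""
    ik = np.fft.fftfreq(B.shape[1], d=dx) * 2j * np.pi      # i * wavenumber
    KX, KY, KZ = np.meshgrid(ik, ik, ik, indexing='ij')
    K = np.stack([KX, KY, KZ])
    Bh = np.fft.fftn(B, axes=(1, 2, 3))
    divh = (K * Bh).sum(0)                                  # div B in Fourier space
    k2 = (K * K).sum(0)                                     # Laplacian symbol
    k2[0, 0, 0] = 1.0                                       # protect the mean mode
    Bh -= K * (divh / k2)                                   # subtract grad(phi)
    return np.real(np.fft.ifftn(Bh, axes=(1, 2, 3)))
```

After the subtraction, the Fourier-space divergence is identically zero mode by mode.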

  12. Adaptive unstructured simulations of diaphragm rupture and perforation opening to start hypersonic air inlets

    International Nuclear Information System (INIS)

    Timofeev, E.V.; Tahir, R.B.; Voinovich, P.A.; Moelder, S.

    2004-01-01

    The concept of 'twin' grid nodes is discussed in the context of unstructured, adaptive meshes that are suitable for highly unsteady flows. The concept is applicable to internal boundary contours (within the computational domain) where the boundary conditions may need to be changed dynamically; for instance, an impermeable solid wall segment can be redefined as a fully permeable invisible boundary segment during the course of the simulation. This can be used to simulate unsteady gas flows with internal boundaries where the flow conditions may change rapidly and drastically. As a demonstration, the idea is applied to study the starting process in hypersonic air inlets by rupturing a diaphragm or by opening wall-perforations. (author)

  13. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    Science.gov (United States)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Re_τ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.
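
The baseline scalar measure the abstract starts from can be sketched in a few lines (Python/SciPy; box test filter of illustrative width on a periodic field). Note that it returns one number per cell and hence, as the abstract points out, carries no directional information:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def small_scale_energy(u, width=3):
    """Per-cell kinetic energy below the test-filter scale.
    u: velocity components, shape (3, nx, ny, nz); periodic domain assumed."""
    u_f = np.stack([uniform_filter(c, size=width, mode='wrap') for c in u])
    return 0.5 * np.sum((u - u_f) ** 2, axis=0)   # energy removed by the filter
```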

  14. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  15. Quantum annealing for combinatorial clustering

    Science.gov (United States)

    Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph

    2018-02-01

    Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-the-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
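
The assignment step maps naturally to a QUBO over one-hot binaries z_ik (point i in cluster k): within-cluster distances form the objective and a quadratic penalty enforces one cluster per point. A sketch of building the matrix (Python/NumPy; the penalty weight is illustrative and must be tuned against the distance scale):

```python
import numpy as np

def clustering_qubo(D, K, penalty):
    """QUBO matrix Q for x^T Q x, with x[i*K + k] = 1 iff point i is in cluster k.
    D is the symmetric pairwise distance matrix; penalty enforces one-hot rows."""
    N = D.shape[0]
    Q = np.zeros((N * K, N * K))
    for i in range(N):
        for j in range(i + 1, N):
            for k in range(K):
                Q[i * K + k, j * K + k] += D[i, j]      # within-cluster cost
    for i in range(N):
        for k in range(K):
            Q[i * K + k, i * K + k] -= penalty          # from (sum_k z_ik - 1)^2
            for k2 in range(k + 1, K):
                Q[i * K + k, i * K + k2] += 2 * penalty
    return Q
```

A feasible (one-hot) minimizer of x^T Q x decodes directly into cluster labels; the same Q can be handed to annealing hardware or to a classical QUBO heuristic such as qbsolv.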

  16. Refined adaptive optics simulation with wide field of view for the E-ELT

    International Nuclear Information System (INIS)

    Chebbo, Manal

    2012-01-01

    Refined simulation tools for wide-field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. The increasing number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes impractical due to the huge number of operations to be performed at each step of the Adaptive Optics (AO) loop. This computational burden requires new approaches to the computation of the DM voltages from WFS data. Classical matrix inversion and matrix-vector multiplication have to be replaced by a more efficient iterative solution of the Least-Squares or Minimum Mean Square Error criterion (based on sparse-matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves become more complex: data fusion from multiple Laser and Natural Guide Stars (LGS/NGS) has to be optimized; mirrors covering the full field of view have to be coupled with dedicated mirrors inside the scientific instrument itself using split or integrated tomography schemes; differential pupil and/or field rotations have to be considered; etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For those reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS/NGS) to effectively manage modes such as tip, tilt and defocus throughout the tomographic reconstruction process. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) which have to manage a combination of adaptive telescope and post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph
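
The flavour of the replacement solver can be shown in a few lines (Python/SciPy; the interaction matrix G here is random and purely illustrative): regularized least-squares normal equations solved by sparse conjugate gradients instead of an explicit inverse:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n_act, n_meas = 5000, 10000
G = sp.random(n_meas, n_act, density=1e-3, format='csr')   # stand-in WFS matrix
A = (G.T @ G + 1e-3 * sp.eye(n_act)).tocsr()               # regularized LS operator
b = G.T @ np.random.randn(n_meas)                          # projected measurements

x, info = cg(A, b, maxiter=200)    # DM command vector, no A^-1 ever formed
```

Each CG iteration costs only one sparse matrix-vector product, which is what makes the approach tractable at ELT scale.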

  17. Adaptive Optics Simulation for the World's Largest Telescope on Multicore Architectures with Multiple GPUs

    KAUST Repository

    Ltaief, Hatem

    2016-06-02

    We present a high-performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used as an operational testbed for simulating the design of new instruments for the European Extremely Large Telescope project (E-ELT), the world's biggest eye and one of Europe's highest priorities in ground-based astronomy. The simulation corresponds to a multi-step, multi-stage procedure, which is fed, near real-time, by system and turbulence data coming from the telescope environment. Based on the PLASMA library powered by the OmpSs dynamic runtime system, our implementation relies on a task-based programming model to permit an asynchronous out-of-order execution. Using modern multicore architectures associated with the enormous computing power of GPUs, the resulting data-driven compute-intensive simulation of the entire MOAO application, composed of the tomographic reconstructor and the observing sequence, is capable of coping with the aforementioned real-time challenge and stands as a reference implementation for the computational astronomy community.

  18. ADAPTIVE MESH REFINEMENT SIMULATIONS OF GALAXY FORMATION: EXPLORING NUMERICAL AND PHYSICAL PARAMETERS

    International Nuclear Information System (INIS)

    Hummels, Cameron B.; Bryan, Greg L.

    2012-01-01

    We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find that a gas disk forms in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy to the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.

  19. Simulating and evaluating an adaptive and integrated traffic lights control system for smart city application

    Science.gov (United States)

    Djuana, E.; Rahardjo, K.; Gozali, F.; Tan, S.; Rambung, R.; Adrian, D.

    2018-01-01

    A city can be categorized as a smart city when its information technology has been developed to the point that the administration can sense, understand, and control every resource to serve its people and sustain the development of the city. One aspect of a smart city is transportation and traffic management. This paper presents a research project to design an adaptive traffic lights control system as part of a smart system for optimizing road utilization and reducing congestion. The research problems addressed include: (1) congestion in one direction toward an intersection due to traffic conditions that change throughout the day, while the timing cycles in traffic light systems are mostly static; (2) lack of timing synchronization among traffic lights at adjacent intersections, which causes unsteady flows; (3) difficulties in monitoring traffic conditions at intersections and the lack of facilities for remotely controlling traffic lights. In this research, a simulator has been built to model the adaptivity and integration among traffic light controllers at adjacent intersections, and a case study consisting of three sets of intersections along Jalan K. H. Hasyim Ashari has been simulated. It can be concluded that timing-slot synchronization among traffic lights is crucial for maintaining a steady traffic flow.
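
One simple rule such a simulator can implement is proportional green-time allocation plus a coordination offset between adjacent intersections; a sketch (Python; the cycle length, minimum green, and offset model are illustrative, not the paper's controller):

```python
def green_split(queues, cycle=120, min_green=10):
    """Split one cycle's green time across phases in proportion to queue lengths."""
    spare = cycle - min_green * len(queues)
    total = sum(queues) or 1                     # avoid division by zero
    return [min_green + spare * q / total for q in queues]

def coordination_offset(distance_m, cruise_speed_mps):
    """Delay the downstream green start so platoons arrive on green."""
    return distance_m / cruise_speed_mps

print(green_split([12, 5, 20, 8]))   # -> [31.3, 18.9, 45.6, 24.2] seconds
```

Re-measuring queues each cycle makes the split adaptive; the offsets give the "green wave" synchronization the paper identifies as crucial.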

  20. Simulating the performance of adaptive optics techniques on FSO communications through the atmosphere

    Science.gov (United States)

    Martínez, Noelia; Rodríguez Ramos, Luis Fernando; Sodnik, Zoran

    2017-08-01

    The Optical Ground Station (OGS), installed at the Teide Observatory since 1995, was built as part of ESA's efforts in the research field of satellite optical communications to test laser telecommunication terminals on board satellites in Low Earth Orbit and Geostationary Orbit. Since one side of the link is located on the Earth, the laser beam (on either the uplink or the downlink) has to contend with atmospheric turbulence. Within the framework of designing an Adaptive Optics system to improve the performance of Free-Space Optical (FSO) communications at the OGS, turbulence conditions on the uplink and downlink have been simulated with the OOMAO (Object-Oriented Matlab Adaptive Optics) Toolbox, as has the possible use of a Laser Guide Star to measure the wavefront in this context. Simulations have been carried out by reducing available atmospheric profiles from both night-time and day-time measurements, taking possible seasonal changes into account. An AO proposal to reduce atmospheric aberrations and, therefore, improve FSO link performance is presented and analysed in this paper.
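
The profile-reduction step can be illustrated by collapsing a layered Cn² profile into the Fried parameter r0 and a seeing value, the standard scalar inputs to an OOMAO-style atmosphere model (Python; the three-layer profile, wavelength, and zenith angle are illustrative):

```python
import numpy as np

def fried_parameter(cn2, dh, wavelength=1.55e-6, zenith_deg=30.0):
    """r0 (m) from a layered Cn^2 profile (m^(-2/3) per layer of thickness dh)."""
    k = 2.0 * np.pi / wavelength
    sec_z = 1.0 / np.cos(np.radians(zenith_deg))
    J = np.sum(np.asarray(cn2) * dh)                  # integrated turbulence
    return (0.423 * k ** 2 * sec_z * J) ** (-3.0 / 5.0)

cn2 = [1e-14, 5e-15, 2e-16]                           # illustrative 3-layer profile
r0 = fried_parameter(cn2, dh=1000.0)
seeing_arcsec = np.degrees(0.98 * 1.55e-6 / r0) * 3600.0
```

Day-time profiles typically yield a much smaller r0 than night-time ones, which is why the seasonal and diurnal reduction matters for sizing the AO system.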

  1. Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms

    International Nuclear Information System (INIS)

    Talpaert, Arthur

    2017-01-01

    This PhD work presents the implementation of a simulation of two-phase flows under the conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve that, we study several models for thermal-hydraulic flows and focus on a technique for the capture of the thin interface between liquid and vapour phases. We thus review some possible techniques for Adaptive Mesh Refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve the precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations presented also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author)

  2. Adaptation of non-technical skills behavioural markers for delivery room simulation.

    Science.gov (United States)

    Bracco, Fabrizio; Masini, Michele; De Tonetti, Gabriele; Brogioni, Francesca; Amidani, Arianna; Monichino, Sara; Maltoni, Alessandra; Dato, Andrea; Grattarola, Claudia; Cordone, Massimo; Torre, Giancarlo; Launo, Claudio; Chiorri, Carlo; Celleno, Danilo

    2017-03-17

    Simulation in healthcare has proved to be a useful method for improving skills and increasing the safety of clinical operations. The debriefing session after the simulated scenario is the core of the simulation, since it allows participants to integrate the experience with theoretical frameworks and procedural guidelines. There is consistent evidence of the relevance of non-technical skills (NTS) to the safe and efficient accomplishment of operations. However, the observation, assessment and feedback on these skills are particularly complex, because the process needs expert observers and the feedback is often provided in judgmental and ineffective ways. The aim of this study was therefore to develop and test a set of observation and rating forms for the NTS behavioural markers of multi-professional teams involved in delivery room emergency simulations (MINTS-DR, Multi-professional Inventory for Non-Technical Skills in the Delivery Room). The MINTS-DR was developed by adapting existing tools and, when needed, by designing new tools according to the literature. We followed a bottom-up process accompanied by interviews and co-design between practitioners and psychology experts. The forms were specific to anaesthetists, gynaecologists, nurses/midwives and assistants, plus a global team assessment tool. We administered the tools in five editions of a simulation training course that involved 48 practitioners. Ratings on usability and usefulness were collected. The mean ratings of the usability and usefulness of the tools were not statistically different from, or were higher than, 4 on a 5-point rating scale. In either case, no significant differences were found across professional categories. The MINTS-DR is quick and easy to administer. It is judged to be a useful asset in maximising the learning experience provided by the simulation.

  3. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    International Nuclear Information System (INIS)

    Li, Taoran; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q.

    2015-01-01

    Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery
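
The first monitoring level can be sketched as follows (Python/NumPy; a simplified 1-D fluence picture per leaf pair, with hypothetical array layouts for the DMI samples): accumulate the MU-weighted difference between actual and planned apertures into a fluence error map:

```python
import numpy as np

def aperture(left, right, grid):
    """1 inside the open part of one leaf pair, 0 under the leaves."""
    return ((grid >= left) & (grid <= right)).astype(float)

def fluence_error_map(planned, actual, mu, grid):
    """Cumulative (actual - planned) fluence per leaf pair.
    planned, actual: (n_samples, n_pairs, 2) left/right leaf positions in cm;
    mu: (n_samples,) monitor units delivered in each ~50 ms DMI interval."""
    fem = np.zeros((planned.shape[1], grid.size))
    for t in range(planned.shape[0]):
        for p in range(planned.shape[1]):
            fem[p] += mu[t] * (aperture(*actual[t, p], grid)
                               - aperture(*planned[t, p], grid))
    return fem
```

The second level, leaf-position verification at control points, is then a direct comparison of the DMI-reported positions against plan tolerances.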

  4. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Taoran, E-mail: taoran.li.duke@gmail.com; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q. [Department of Radiation Oncology, Duke University Medical Center Durham, North Carolina 27710 (United States)

    2015-01-15

    Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery

  5. Quality assurance for online adapted treatment plans: benchmarking and delivery monitoring simulation.

    Science.gov (United States)

    Li, Taoran; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q

    2015-01-01

    An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system's performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery. Online adapted plans were

  6. Population annealing: Theory and application in spin glasses

    OpenAIRE

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.

    2015-01-01

    Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar ...
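
A compact population annealing loop for a toy 1-D Ising chain (Python/NumPy; population size, temperature schedule, and sweep count are illustrative) shows the two alternating steps: resample replicas with weights exp(-Δβ E), then re-equilibrate at the new temperature:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(s):
    """Energy of a 1-D Ising chain with free ends."""
    return -np.sum(s[:-1] * s[1:])

def mc_sweep(s, beta):
    """One Metropolis sweep over all spins at inverse temperature beta."""
    for i in rng.permutation(len(s)):
        left = s[i - 1] if i > 0 else 0
        right = s[i + 1] if i < len(s) - 1 else 0
        dE = 2 * s[i] * (left + right)
        if rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

def population_annealing(R=200, L=32, betas=np.linspace(0.1, 2.0, 20), sweeps=5):
    pop = rng.choice([-1, 1], size=(R, L))
    for b0, b1 in zip(betas[:-1], betas[1:]):
        E = np.array([energy(s) for s in pop])
        w = np.exp(-(b1 - b0) * (E - E.min()))   # reweight to the new temperature
        idx = rng.choice(R, size=R, p=w / w.sum())
        pop = pop[idx].copy()                    # resample the population
        for s in pop:
            for _ in range(sweeps):
                mc_sweep(s, b1)                  # re-equilibrate each replica
    return pop
```

The resampling step is what distinguishes the method from simulated annealing: low-energy replicas are duplicated and high-energy ones culled, keeping the population close to equilibrium at every temperature.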

  7. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    Science.gov (United States)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production facilitates the planning of adaptation strategies. Because socio-environmental conditions differ between local areas, it is advantageous to assess potential adaptation measures for a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options including shifts of planting date and the use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded data for climate and soil were used to prepare input data for the DSSAT model. Weather input data were prepared at the 30 m resolution using bilinear interpolation from gridded climate scenario data obtained from the Korea Meteorological Administration. The spatial resolution of temperature and precipitation was 1 km, whereas that of solar radiation was 12.5 km. Soil series data at the 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is the soil input file for the DSSAT model, was prepared using the physical and chemical properties of a given soil series, available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for climate change adaptation. In the prediction of maize yield, combinations of 20 planting dates and two cultivars were used as management options. Predicted crop yields differed by site, even within a relatively small region. For example, the maximum of the average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km² (Fig. 1). There was also spatial variation in the ideal management option within the region (Fig. 2). These results suggested that local
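
The option search itself is a small grid search per cell; a sketch (Python; `run_dssat` is a stand-in for the actual DSSAT invocation, and the dates and cultivar names are illustrative):

```python
import itertools
import numpy as np

planting_dates = ["2050-04-15", "2050-04-25", "2050-05-05"]   # 10 in the study
cultivars = ["early", "mid", "late"]

def run_dssat(cell, date, cultivar):
    """Stand-in for a real DSSAT run; returns mean yield over seasons (kg/ha)."""
    seed = hash((cell, date, cultivar)) % 2 ** 32
    return np.random.default_rng(seed).normal(8000, 500)

def best_option(cell):
    """Exhaustive search over planting date x cultivar for one grid cell."""
    options = itertools.product(planting_dates, cultivars)
    return max(options, key=lambda o: run_dssat(cell, *o))

print(best_option(cell=(127.0, 37.5)))   # one 30 m cell, keyed by coordinates
```

Run per cell, this is what produces the spatial maps of ideal management options the abstract refers to.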

  8. Adaptive resolution simulation of polarizable supramolecular coarse-grained water models

    International Nuclear Information System (INIS)

    Zavadlav, Julija; Praprotnik, Matej; Melo, Manuel N.; Marrink, Siewert J.

    2015-01-01

    Multiscale simulation methods, such as the adaptive resolution scheme, are becoming increasingly popular due to their significant computational advantages with respect to conventional atomistic simulations. For this kind of simulation, it is essential to develop accurate multiscale water models that can be used to solvate biophysical systems of interest. Recently, a 4-to-1 mapping was used to couple the bundled simple point charge water with the MARTINI model. Here, we extend the supramolecular mapping to coarse-grained models with explicit charges. In particular, the two tested models are the polarizable water and big multipole water models associated with the MARTINI force field. As the corresponding coarse-grained representations consist of several interaction sites, we couple the orientational degrees of freedom of the atomistic and coarse-grained representations via a harmonic energy penalty term. This additional energy term aligns the dipole moments of both representations. We test this coupling by studying the system under an applied static external electric field. We show that our approach leads to the correct reproduction of the relevant structural and dynamical properties

  9. RPYFMM: Parallel adaptive fast multipole method for Rotne-Prager-Yamakawa tensor in biomolecular hydrodynamics simulations

    Science.gov (United States)

    Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.

    2018-06-01

    RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
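
The quantity being accelerated is the RPY mobility-times-force product; the direct O(N²) evaluation (Python/NumPy; far-field branch r > 2a, equal bead radii), against which an FMM code like RPYFMM would typically be validated, looks like:

```python
import numpy as np

def rpy_matvec(pos, forces, a=1.0, eta=1.0):
    """Direct O(N^2) RPY velocity evaluation (far-field branch, r > 2a).
    pos, forces: arrays of shape (n, 3)."""
    n = len(pos)
    vel = forces / (6.0 * np.pi * eta * a)        # self-mobility term
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            rhat = r / d
            c1 = 1.0 + 2.0 * a * a / (3.0 * d * d)
            c2 = 1.0 - 2.0 * a * a / (d * d)
            vel[i] += (c1 * forces[j]
                       + c2 * rhat * (rhat @ forces[j])) / (8.0 * np.pi * eta * d)
    return vel
```

The FMM replaces the inner double loop with four Laplace expansions, reducing the cost to O(N) and making the 15-million-bead benchmark above feasible.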

  10. Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems

    KAUST Repository

    Cotter, Simon L.; Vejchodský , Tomá š; Erban, Radek

    2013-01-01

    Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
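
The refinement-flagging idea can be sketched with a toy birth-death process (Python/NumPy): a short Gillespie (SSA) run visits the high-probability region of state space, and bins with non-negligible occupancy are marked for the fine mesh (a faithful version would weight states by dwell time rather than raw counts):

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=500.0):
    """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death * x)."""
    t, x, states = 0.0, x0, []
    while t < t_end:
        rates = np.array([k_birth, k_death * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)         # time to next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        states.append(x)
    return np.array(states)

states = ssa_birth_death()
hist, edges = np.histogram(states, bins=40)
refine = hist / hist.sum() > 1e-3    # flag state-space bins worth a fine mesh
```

The flagged bins then receive fine finite elements for the stationary Fokker-Planck solve, while the rest of the domain is coarsened.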

  11. SU-F-J-110: MRI-Guided Single-Session Simulation, Online Adaptation, and Treatment

    International Nuclear Information System (INIS)

    Hill, P; Geurts, M; Mittauer, K; Bayouth, J

    2016-01-01

    Purpose: To develop a combined simulation and treatment workflow for MRI-guided radiation therapy using the ViewRay treatment planning and delivery system. Methods: Several features of the ViewRay MRIdian planning and treatment workflows are used to simulate and treat patients that require emergent radiotherapy. A simple “pre-plan” is created on diagnostic imaging retrieved from radiology PACS, where conformal fields are created to target a volume defined by a physician based on review of the diagnostic images and chart notes. After initial consult in radiation oncology, the patient is brought to the treatment room, immobilized, and imaged in treatment position with a volumetric MR. While the patient rests on the table, the pre-plan is applied to the treatment planning MR and dose is calculated in the treatment geometry. After physician review, modification of the plan may include updating the target definition, redefining fields, or re-balancing beam weights. Once an acceptable treatment plan is finalized and approved, the patient is treated. Results: Careful preparation and judicious choices in the online planning process allow conformal treatment plans to be created and delivered in a single, thirty-minute session. Several advantages have been identified using this process as compared to conventional urgent CT simulation and delivery. Efficiency gains are notable, as physicians appreciate the predictable time commitment and patient waiting time for treatment is decreased. MR guidance in a treatment position offers both enhanced contrast for target delineation and reduction of setup uncertainties. The MRIdian system tools designed for adaptive radiotherapy are particularly useful, enabling plan changes to be made in minutes. Finally, the resulting plans, typically 6 conformal beams, are delivered as quickly as more conventional AP/PA beam arrangements with comparatively superior dose distributions. Conclusion: The ViewRay treatment planning software and

  12. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with an objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP)-directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload, associated with developing and staffing JSCP [4] directed contingency plans, against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, number of plans, planning processes, and number of planners on the one hand and the time required for plan development on the other provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight for key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.

  13. An adaptive grid refinement strategy for the simulation of negative streamers

    International Nuclear Information System (INIS)

    Montijn, C.; Hundsdorfer, W.; Ebert, U.

    2006-01-01

    The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods in which the refinement criterion is based on local error monitors fail due to the pulled character of the streamer front that propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented, they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers

  14. How viral capsids adapt to mismatched cargoes—identifying mechanisms of morphology control with simulations

    Science.gov (United States)

    Elrad, Oren

    2009-03-01

    During the replication of many viruses, hundreds to thousands of protein subunits assemble around the viral nucleic acid to form a protein shell called a capsid. Most viruses form one particular structure with astonishing fidelity; yet, recent experiments demonstrate that capsids can assemble with different sizes and morphologies to accommodate nucleic acids or other cargoes such as functionalized nanoparticles. In this talk, we will explore the mechanisms of simultaneous assembly and cargo encapsidation with a computational model that describes the assembly of icosahedral capsids around functionalized nanoparticles. With this model, we find parameter values for which subunits faithfully form empty capsids with a single morphology, but adaptively assemble into different icosahedral morphologies around nanoparticles with different diameters. Analyzing trajectories in which adaptation is or is not successful sheds light on the mechanisms by which capsid morphology may be controlled in vitro and in vivo, and suggests experiments to test these mechanisms. We compare the simulation results to recent experiments in which Brome Mosaic Virus capsid proteins assemble around functionalized nanoparticles, and describe how future experiments can test the model predictions.

  15. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    Science.gov (United States)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated into control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length-rate error, without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.
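
The leader-clustering skeleton underneath AFLC can be sketched as follows (Python/NumPy; the vigilance test here is a plain distance threshold, whereas AFLC layers a ratio criterion and fuzzy memberships on top of this skeleton):

```python
import numpy as np

def leader_cluster(samples, tau=1.0):
    """Online leader clustering: the first sample founds the first cluster;
    later samples join the nearest cluster if within tau, else found a new one."""
    centroids, counts, labels = [], [], []
    for x in samples:
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            k = int(np.argmin(d))
            if d[k] < tau:
                counts[k] += 1
                centroids[k] += (x - centroids[k]) / counts[k]   # running mean
                labels.append(k)
                continue
        centroids.append(np.array(x, dtype=float))   # spawn a new cluster
        counts.append(1)
        labels.append(len(centroids) - 1)
    return np.array(centroids), labels
```

Because clusters are created on the fly, the number of system states never has to be fixed in advance, which is what makes the scheme suitable for on-line control.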

  16. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity, using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant converges to a user-defined threshold. The method fits naturally into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
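
The adaptivity mechanism is easiest to see in one dimension (Python; dyadic hat functions on [0, 1], no boundary points; the full method does this on Smolyak grids in many dimensions): a node's children are added only where its hierarchical surplus exceeds the tolerance, so the grid concentrates where the code's response is hardest to interpolate:

```python
import numpy as np

def hat(x, center, h):
    """Linear hat function of half-width h centred at center."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

def adaptive_interpolate(f, tol=1e-3, max_level=12):
    """1-D hierarchical surplus refinement on dyadic points in [0, 1]."""
    nodes = {}                                  # center -> (surplus, level)
    active = [(0.5, 1)]
    def interp(x):
        return sum(s * hat(x, c, 2.0 ** -lvl) for c, (s, lvl) in nodes.items())
    while active:
        c, lvl = active.pop()
        s = f(c) - interp(c)                    # hierarchical surplus at c
        nodes[c] = (s, lvl)
        if abs(s) > tol and lvl < max_level:    # refine only where surplus is large
            h = 2.0 ** -(lvl + 1)
            active += [(c - h, lvl + 1), (c + h, lvl + 1)]
    return interp

g = adaptive_interpolate(lambda x: np.exp(-50 * (x - 0.3) ** 2))
```

Each surplus evaluation costs one run of the underlying code, and the runs are independent, which is why the approach parallelizes so naturally.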

  17. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    Directory of Open Access Journals (Sweden)

    Yao Yao

    2016-12-01

    Full Text Available We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.

  18. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment.

    Science.gov (United States)

    Yao, Yao; Storme, Veronique; Marchal, Kathleen; Van de Peer, Yves

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.

  19. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    Science.gov (United States)

    Yao, Yao; Storme, Veronique; Marchal, Kathleen

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population. PMID:28028477

  20. Initial reconstruction results from a simulated adaptive small animal C shaped PET/MR insert

    Energy Technology Data Exchange (ETDEWEB)

    Efthimiou, Nikos [Technological Educational Institute of Athens (Greece); Kostou, Theodora; Papadimitroulas, Panagiotis [Technological Educational Institute of Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras (Greece); Charalampos, Tsoumpas [Division of Biomedical Imaging, University of Leeds, Leeds (United Kingdom); Loudos, George [Technological Educational Institute of Athens (Greece)

    2015-05-18

    Traditionally, most clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body as well as dedicated organ scans, which is not optimized with regard to sensitivity and resolution. Several groups have proposed the construction of dedicated PET inserts for MR scanners, rather than the construction of new integrated PET/MR scanners. The space inside an MR scanner is a limiting factor, which can be reduced further by the use of extra coils and can render the use of non-flexible cylindrical PET scanners difficult if not impossible. The incorporation of small SiPM arrays can provide the means to design adaptive PET scanners that fit in tight locations, which makes imaging possible and improves sensitivity due to the closer approximation to the organ of interest. In order to assess the performance of such a device, we simulated the geometry of a C-shaped PET scanner using GATE. The design of the C-PET was based on a realistic SiPM-BGO scenario. In order to reconstruct the simulated data with STIR, we had to calculate the system probability matrix corresponding to this non-standard geometry. For this purpose we developed an efficient multi-threaded ray-tracing technique to calculate the line-integral paths in voxel arrays. One of the major features is the ability to automatically adjust the size of the FOV according to the geometry of the detectors. The initial results showed that the sensitivity improved as the angle between the detector arrays increased, thus better sampling the scanner's field of view (FOV) angularly. The more complete angular coverage also helped improve the shape of the source in the reconstructed images. Furthermore, by adapting the FOV closer to the size of the source, the sensitivity per voxel is improved.
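
The geometric kernel behind such a system-matrix calculation is a Siddon-style parametric voxel traversal. A single-threaded sketch (Python/NumPy; the real code is multi-threaded and auto-sizes the FOV) that returns one sparse matrix row, i.e., the voxels crossed by one line of response and their intersection lengths:

```python
import numpy as np

def siddon_ray(p0, p1, n_vox, vox_size, origin):
    """Voxel indices and intersection lengths for the segment p0 -> p1 (3-D)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n_vox = np.asarray(n_vox)
    vox_size, origin = np.asarray(vox_size, float), np.asarray(origin, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for ax in range(3):                     # parametric crossings of grid planes
        if abs(d[ax]) < 1e-12:
            continue
        planes = origin[ax] + vox_size[ax] * np.arange(n_vox[ax] + 1)
        alphas.extend((planes - p0[ax]) / d[ax])
    alphas = np.unique([a for a in alphas if 0.0 <= a <= 1.0])
    length = np.linalg.norm(d)
    row = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d      # segment midpoint identifies the voxel
        idx = np.floor((mid - origin) / vox_size).astype(int)
        if np.all(idx >= 0) and np.all(idx < n_vox):
            row.append((tuple(idx), (a1 - a0) * length))
    return row
```

Looping this kernel over all detector-pair lines of response, in parallel, yields the full system probability matrix for the C-shaped geometry.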

  1. Overall simulation of a HTGR plant with the gas adapted MANTA code

    International Nuclear Information System (INIS)

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

    Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect-cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an intermediate heat exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has undergone new developments to simulate all the forced-convection transients of a nuclear plant with a gas-cooled high temperature reactor, including specific core thermal-hydraulic and neutronic models, gas and water-steam turbomachinery, and the control structure. The gas-adapted MANTA code version is now able to model a complete HTGR plant with a direct Brayton cycle as well as with indirect cycles. To validate these new developments, a MANTA model of a real plant with a direct Brayton cycle was built, and steady states and transients were compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code was performed for transient calculations of the AREVA indirect-cycle HTR project plant. Moreover, to improve user-friendliness, so that MANTA can serve as a system design and optimization tool as well as a plant simulation tool, a Man-Machine Interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  2. Initial reconstruction results from a simulated adaptive small animal C shaped PET/MR insert

    International Nuclear Information System (INIS)

    Efthimiou, Nikos; Kostou, Theodora; Papadimitroulas, Panagiotis; Charalampos, Tsoumpas; Loudos, George

    2015-01-01

    Traditionally, most clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body as well as dedicated organ scans, which is not optimized with regard to sensitivity and resolution. Several groups have proposed the construction of dedicated PET inserts for MR scanners rather than the construction of new integrated PET/MR scanners. The space inside an MR scanner is a limiting factor, which can be reduced further by the use of extra coils and which renders the use of non-flexible cylindrical PET scanners difficult if not impossible. The incorporation of small SiPM arrays can provide the means to design adaptive PET scanners that fit in tight locations, which makes imaging possible and improves sensitivity owing to the closer approximation to the organ of interest. In order to assess the performance of such a device, we simulated the geometry of a C-shaped PET scanner using GATE. The design of the C-PET was based on a realistic SiPM-BGO scenario. In order to reconstruct the simulated data with STIR, we had to calculate the system probability matrix corresponding to this non-standard geometry. For this purpose we developed an efficient multi-threaded ray-tracing technique to calculate the line integral paths in voxel arrays. One of its major features is the ability to automatically adjust the size of the field of view (FOV) according to the geometry of the detectors. The initial results showed that the sensitivity improved as the angle between the detector arrays increased, thus sampling the scanner's FOV better angularly. The more complete angular coverage also helped to improve the shape of the source in the reconstructed images. Furthermore, by adapting the FOV more closely to the size of the source, the sensitivity per voxel is improved.

  3. The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture

    KAUST Repository

    Azdoud, Yan

    2014-04-19

    We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain. © 2014 Springer-Verlag Berlin Heidelberg.

  4. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    Science.gov (United States)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
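
    As a toy illustration of the adaptive principle at work here, closure coefficients updated online so that simulated outputs track sparse measurements, the sketch below tunes a single coefficient of a scalar surrogate model by stochastic gradient descent on the output error. This is not the retrospective cost algorithm itself, which embeds the update inside a CFD solver; all names and values are illustrative assumptions.

```python
import numpy as np

def surrogate(theta, x):
    """Stand-in for 'simulated output at the measurement location'."""
    return theta * x

rng = np.random.default_rng(0)
theta_true, theta = 0.09, 0.5          # "true" vs. initial closure coefficient
lr = 0.05                              # adaptation gain
for k in range(200):
    x = rng.uniform(1.0, 2.0)          # sampled flow state
    y_meas = surrogate(theta_true, x)  # sparse measurement
    y_sim = surrogate(theta, x)
    # Gradient of the instantaneous cost 0.5*(y_sim - y_meas)^2 w.r.t. theta
    theta -= lr * (y_sim - y_meas) * x
print(f"adapted coefficient: {theta:.4f} (true {theta_true})")
```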

  5. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    Science.gov (United States)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near-infrared spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare it to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of the activation region, expansions of the activation region, and combined contractions and shifts of the activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation-pattern-change simulation, reaching 99% classification accuracy. Significance. GMMAC may prove useful for fNIRS brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
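
    A minimal sketch of the unsupervised adaptive idea follows: each new, unlabelled feature vector is soft-assigned to a class component of a Gaussian mixture, and the component parameters are then nudged toward it so the decision boundary tracks slow drifts in the activation pattern. GMMAC itself uses variational Bayesian updates with the previous model as prior; this responsibility-weighted running mean update is a simplified stand-in, and all names are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

class AdaptiveGMM:
    def __init__(self, means, cov, eta=0.05):
        self.means = [np.asarray(m, float) for m in means]  # one per class
        self.cov = np.asarray(cov, float)
        self.eta = eta                                      # adaptation rate

    def classify_and_update(self, x):
        x = np.asarray(x, float)
        lik = np.array([multivariate_normal.pdf(x, m, self.cov)
                        for m in self.means])
        resp = lik / lik.sum()                  # soft responsibilities
        label = int(np.argmax(resp))
        for k, m in enumerate(self.means):      # drift-tracking mean update
            self.means[k] = m + self.eta * resp[k] * (x - m)
        return label

clf = AdaptiveGMM(means=[[0, 0], [2, 2]], cov=np.eye(2))
label = clf.classify_and_update([0.3, -0.1])   # classify one new feature vector
```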

  6. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  7. Reconstruction of X-rays spectra of clinical linear accelerators using the generalized simulated annealing method; Reconstrucao de espectros de raios-X de aceleradores lineares clinicos usando o metodo de recozimento simulado generalizado

    Energy Technology Data Exchange (ETDEWEB)

    Manrique, John Peter O.; Costa, Alessandro M., E-mail: johnp067@usp.br, E-mail: amcosta@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil)

    2016-07-01

    The spectral distribution of the megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. The dose delivered to a patient undergoing radiotherapy is calculated with treatment planning systems (TPS), which use convolution-superposition algorithms and require prior knowledge of the photon fluence spectrum to compute three-dimensional dose distributions, thereby ensuring accurate tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for the reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on Tsallis' work on quasi-equilibrium statistics. To validate the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
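
    The inverse problem described above lends itself to a compact illustration. The sketch below recovers discrete spectral weights w_i from an aluminum transmission curve T(t) = Σ_i w_i exp(-μ_i t) using SciPy's dual_annealing, which implements generalized simulated annealing based on Tsallis statistics, the same family of algorithm the authors use; the energy bins, attenuation coefficients, and synthetic data are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import dual_annealing

mu = np.array([0.5, 0.3, 0.2, 0.15])        # cm^-1, one per energy bin (assumed)
t = np.linspace(0.0, 10.0, 30)              # aluminum thicknesses, cm
w_true = np.array([0.1, 0.4, 0.3, 0.2])
T_meas = np.exp(-np.outer(t, mu)) @ w_true  # synthetic transmission curve

def cost(w):
    # Enforce a normalised spectrum, then score the transmission mismatch.
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    return np.sum((np.exp(-np.outer(t, mu)) @ w - T_meas) ** 2)

res = dual_annealing(cost, bounds=[(0.0, 1.0)] * len(mu))
w_est = np.abs(res.x) / np.abs(res.x).sum()
print("estimated weights:", np.round(w_est, 3))
```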

  8. Design, development, and performance of an adapter for simulation of ocular melanoma patients in supine position for proton beam therapy

    International Nuclear Information System (INIS)

    Daftari, I.; Phillips, T.L.

    2003-01-01

    A patient assembly adapter system for ocular melanoma patient simulation was developed and its performance evaluated. The aim of the apparatus was to allow patients to be simulated in the supine position using a commercial x-ray simulator. The apparatus consists of a base plate, a head immobilization holder, and a patient assembly system that includes a fixation light and a collimator system. The reproducibility of repeated fixation was initially tested with a head phantom. Simulation and verification films were studied for seven consecutive patients treated with proton beam therapy. Patient simulation was performed in the supine position using a dental fixation bite block and a thermoplastic head mask immobilization device with the patient adapter system. Two orthogonal x-rays were used to obtain the x, y, and z coordinates of sutured tantalum rings for treatment planning with the EYEPLAN software. The verification films were obtained in the treatment position with the fixation light along the central axis of the eye. The results indicate good agreement, with deviations within 0.5 mm. This investigation showed that the same planning accuracy can be achieved by performing simulation with the adapter described above and the patient in the supine position as by performing simulation with the patient in the seated, treatment position. The adapter can also be attached to the head of the chair for simulating in the seated position using a fixed x-ray unit. This has three advantages: (1) it saves radiation therapists time; (2) it eliminates the need for arranging access to the treatment room, thus avoiding potential conflicts in treatment room usage; and (3) it allows the use of a commercial simulator

  9. Design, development, and performance of an adapter for simulation of ocular melanoma patients in supine position for proton beam therapy

    Science.gov (United States)

    Daftari, I.; Phillips, T. L.

    2003-06-01

    A patient assembly adapter system for ocular melanoma patient simulation was developed and its performance evaluated. The aim of the apparatus was to allow patients to be simulated in the supine position using a commercial x-ray simulator. The apparatus consists of a base plate, a head immobilization holder, and a patient assembly system that includes a fixation light and a collimator system. The reproducibility of repeated fixation was initially tested with a head phantom. Simulation and verification films were studied for seven consecutive patients treated with proton beam therapy. Patient simulation was performed in the supine position using a dental fixation bite block and a thermoplastic head mask immobilization device with the patient adapter system. Two orthogonal x-rays were used to obtain the x, y, and z coordinates of sutured tantalum rings for treatment planning with the EYEPLAN software. The verification films were obtained in the treatment position with the fixation light along the central axis of the eye. The results indicate good agreement, with deviations within 0.5 mm. This investigation showed that the same planning accuracy can be achieved by performing simulation with the adapter described above and the patient in the supine position as by performing simulation with the patient in the seated, treatment position. The adapter can also be attached to the head of the chair for simulating in the seated position using a fixed x-ray unit. This has three advantages: (1) it saves radiation therapists time; (2) it eliminates the need for arranging access to the treatment room, thus avoiding potential conflicts in treatment room usage; and (3) it allows the use of a commercial simulator.

  10. Fast simulation of transport and adaptive permeability estimation in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Berre, Inga

    2005-07-01

    The focus of the thesis is twofold: Both fast simulation of transport in porous media and adaptive estimation of permeability are considered. A short introduction that motivates the work on these topics is given in Chapter 1. In Chapter 2, the governing equations for one- and two-phase flow in porous media are presented. Overall numerical solution strategies for the two-phase flow model are also discussed briefly. The concepts of streamlines and time-of-flight are introduced in Chapter 3. Methods for computing streamlines and time-of-flight are also presented in this chapter. Subsequently, in Chapters 4 and 5, the focus is on simulation of transport in a time-of-flight perspective. In Chapter 4, transport of fluids along streamlines is considered. Chapter 5 introduces a different viewpoint based on the evolution of isocontours of the fluid saturation. While the first chapters focus on the forward problem, which consists in solving a mathematical model given the reservoir parameters, Chapters 6, 7 and 8 are devoted to the inverse problem of permeability estimation. An introduction to the problem of identifying spatial variability in reservoir permeability by inversion of dynamic production data is given in Chapter 6. In Chapter 7, adaptive multiscale strategies for permeability estimation are discussed. Subsequently, Chapter 8 presents a level-set approach for improving piecewise constant permeability representations. Finally, Chapter 9 summarizes the results obtained in the thesis; in addition, the chapter gives some recommendations and suggests directions for future work. Part II In Part II, the following papers are included in the order they were completed: Paper A: A Streamline Front Tracking Method for Two- and Three-Phase Flow Including Capillary Forces. I. Berre, H. K. Dahle, K. H. Karlsen, and H. F. Nordhaug. In Fluid flow and transport in porous media: mathematical and numerical treatment (South Hadley, MA, 2001), volume 295 of Contemp. Math., pages 49

  11. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    International Nuclear Information System (INIS)

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-01

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.

  12. Implantable collamer lens and femtosecond laser for myopia: comparison using an adaptive optics visual simulator

    Directory of Open Access Journals (Sweden)

    Cari Pérez-Vives

    2014-04-01

    Purpose: To compare the optical and visual quality of implantable collamer lens (ICL) implantation and femtosecond laser in situ keratomileusis (F-LASIK) for myopia. Methods: The CRX1 adaptive optics visual simulator (Imagine Eyes, Orsay, France) was used to simulate the wavefront aberration pattern after the two surgical procedures for -3-diopter (D) and -6-D myopia. Visual acuity at different contrasts and contrast sensitivities at 10, 20, and 25 cycles/degree (cpd) were measured for 3-mm and 5-mm pupils. The modulation transfer function (MTF) and point spread function (PSF) were calculated for 5-mm pupils. Results: The F-LASIK MTF was worse than the ICL MTF, which was close to the diffraction-limited MTF. ICL cases showed less spreading of the PSF than F-LASIK cases, and better visual acuity values for all pupils, contrasts, and myopic treatments (p<0.05). For -3-D myopia, differences in contrast sensitivities between the procedures were not statistically significant (p>0.05). For -6-D myopia, however, statistically significant differences in contrast sensitivities were found for both pupils at all evaluated spatial frequencies (p<0.05), with better contrast sensitivities after ICL implantation than after F-LASIK. Conclusions: ICL implantation and F-LASIK both provide good optical and visual quality, although the former yields better outcomes in MTF, PSF, visual acuity, and contrast sensitivity, especially for cases with large refractive errors and pupil sizes. These outcomes are related to the larger high-order aberrations produced by F-LASIK.

  13. Reactor pressure vessel thermal annealing

    International Nuclear Information System (INIS)

    Lee, A.D.

    1997-01-01

    The steel plates and/or forgings and welds in the beltline region of a reactor pressure vessel (RPV) are subject to embrittlement from neutron irradiation. This embrittlement causes the fracture toughness of the beltline materials to be less than the fracture toughness of the unirradiated material. Material properties of RPVs that have been irradiated and embrittled are recoverable through thermal annealing of the vessel. The amount of recovery primarily depends on the level of the irradiation embrittlement, the chemical composition of the steel, and the annealing temperature and time. Since annealing is an option for extending the service lives of RPVs or establishing less restrictive pressure-temperature (P-T) limits, the industry, the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC) have assisted in efforts to determine the viability of thermal annealing for embrittlement recovery. General guidance for in-service annealing is provided in American Society for Testing and Materials (ASTM) Standard E 509-86. In addition, the American Society of Mechanical Engineers (ASME) Code Case N-557 addresses annealing conditions (temperature and duration), temperature monitoring, evaluation of loadings, and non-destructive examination techniques. The NRC thermal annealing rule (10 CFR 50.66) was approved by the Commission and published in the Federal Register on December 19, 1995. The Regulatory Guide on thermal annealing (RG 1.162) was processed in parallel with the rule package and was published on February 15, 1996. RG 1.162 contains a listing of issues that need to be addressed for thermal annealing of an RPV. The RG also provides alternatives for predicting re-embrittlement trends after the thermal anneal has been completed. This paper gives an overview of the methodology and recent technical references associated with thermal annealing, including results from the DOE annealing prototype demonstration project as well as related NRC activities.

  14. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    Science.gov (United States)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets, whereas global models perform modelling using all the available data. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process; parametric models, in contrast, have a fixed structure defined before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression splines (MARS) and a global parametric model called an artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of the two models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to the central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
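
    A sketch of this kind of model comparison by ROC AUC is shown below, with a scikit-learn MLP standing in for the ANN and MARS assumed to come from the third-party py-earth package. The synthetic data replaces the nine Landsat-derived drivers, so the setup is illustrative, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from pyearth import Earth  # assumption: the py-earth package is installed

# Synthetic stand-in for the 9 urbanization drivers and the urban/non-urban label.
X, y = make_classification(n_samples=2000, n_features=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
mars = Earth(max_degree=2).fit(X_tr, y_tr)   # regress the 0/1 label

auc_ann = roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1])
auc_mars = roc_auc_score(y_te, mars.predict(X_te))  # fitted value as score
print(f"ANN AUC = {auc_ann:.3f}, MARS AUC = {auc_mars:.3f}")
```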

  15. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    Science.gov (United States)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  16. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; López-Cámara, Diego

    2012-01-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  17. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    De Colle, Fabio; Ramirez-Ruiz, Enrico [Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064 (United States); Granot, Jonathan [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Lopez-Camara, Diego [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 D.F. (Mexico)

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  18. MRI-based treatment plan simulation and adaptation for ion radiotherapy using a classification-based approach

    International Nuclear Information System (INIS)

    Rank, Christopher M; Tremmel, Christoph; Hünemohr, Nora; Nagel, Armin M; Jäkel, Oliver; Greilich, Steffen

    2013-01-01

    In order to benefit from the highly conformal irradiation of tumors in ion radiotherapy, sophisticated treatment planning and simulation are required. The purpose of this study was to investigate the potential of MRI for ion radiotherapy treatment plan simulation and adaptation using a classification-based approach. Firstly, a voxelwise tissue classification was applied to derive pseudo CT numbers from MR images using up to 8 contrasts. Appropriate MR sequences and parameters were evaluated in cross-validation studies of three phantoms. Secondly, ion radiotherapy treatment plans were optimized using both the MRI-based pseudo CT and the reference CT, and recalculated on the reference CT. Finally, a target shift was simulated, and a treatment plan adapted to the shift was optimized on a pseudo CT and compared to reference CT optimizations without plan adaptation. The derivation of pseudo CT values led to mean absolute errors in the range of 81-95 HU. The most significant deviations appeared at borders between air and the different tissue classes and originated from partial volume effects. Simulations of ion radiotherapy treatment plans using pseudo CT for optimization revealed only small underdosages in distal regions of the target volume, with deviations of the mean PTV dose of 1.4-3.1% compared to reference CT optimizations. A plan adapted to the target volume shift and optimized on the pseudo CT exhibited target dose coverage comparable to that of a non-adapted plan optimized on a reference CT. We were able to show that an MRI-based derivation of pseudo CT values using a purely statistical classification approach is feasible, although no direct physical relationship exists. Large errors appeared for compact bone classes and came from an imperfect distinction of bone and other tissue types in MRI. In simulations of treatment plans, it was demonstrated that these deviations are comparable to the uncertainties of a target volume shift of 2 mm in two directions.
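
    The classification step can be sketched compactly: voxelwise multi-contrast MR intensities are mapped to tissue classes, and each class carries a representative CT number. The classifier choice (k-NN), the class list, and the HU values below are illustrative assumptions, not the study's calibration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

HU = {"air": -1000, "fat": -90, "soft": 40, "bone": 700}  # assumed CT numbers
classes = list(HU)

# Training voxels: rows of MR intensities from several contrasts,
# here 3 contrasts per voxel (a synthetic stand-in for the phantom data).
rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, size=(4, 3))
X_train = np.vstack([c + 0.05 * rng.standard_normal((50, 3)) for c in centers])
y_train = np.repeat(classes, 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def pseudo_ct(mr_voxels):
    """Map an (n_voxels, n_contrasts) array to pseudo CT numbers."""
    return np.array([HU[c] for c in clf.predict(mr_voxels)])

print(pseudo_ct(centers + 0.01))  # one test voxel near each tissue class
```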

  19. Composition dependent thermal annealing behaviour of ion tracks in apatite

    Energy Technology Data Exchange (ETDEWEB)

    Nadzri, A., E-mail: allina.nadzri@anu.edu.au [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Schauries, D.; Mota-Santiago, P.; Muradoglu, S. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Trautmann, C. [GSI Helmholtz Centre for Heavy Ion Research, Planckstrasse 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, 64287 Darmstadt (Germany); Gleadow, A.J.W. [School of Earth Science, University of Melbourne, Melbourne, VIC 3010 (Australia); Hawley, A. [Australian Synchrotron, 800 Blackburn Road, Clayton, VIC 3168 (Australia); Kluth, P. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia)

    2016-07-15

    Natural apatite samples with different F/Cl content from a variety of geological locations (Durango, Mexico; Mud Tank, Australia; and Snarum, Norway) were irradiated with swift heavy ions to simulate fission tracks. The annealing kinetics of the resulting ion tracks was investigated using synchrotron-based small-angle X-ray scattering (SAXS) combined with ex situ annealing. The extracted activation energies for track recrystallization are consistent with previous track-etching studies; tracks in the chlorine-rich Snarum apatite are more resistant to annealing than those in the other compositions.

  20. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    Science.gov (United States)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
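
    A hedged sketch of how such a dose-reduced acquisition can be simulated: quantum noise is modelled by Poisson statistics on scaled photon counts, and the extra smoothing of a more sensitive film-screen system is mimicked by a Gaussian filter with a "moderate" or "strong" width. The parameter values are illustrative assumptions, not the paper's calibrated physical model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_dose_reduction(counts, dose_factor, smoothing="moderate"):
    """counts: expected photon counts per pixel at full dose."""
    reduced = counts * dose_factor                  # e.g. 0.75 for -25% dose
    noisy = np.random.default_rng(0).poisson(reduced).astype(float)
    sigma = {"moderate": 0.8, "strong": 2.0}[smoothing]  # assumed widths
    return gaussian_filter(noisy / dose_factor, sigma)   # renormalised image

full_dose = np.full((128, 128), 1000.0)             # flat-field toy image
img_75 = simulate_dose_reduction(full_dose, 0.75, "moderate")
img_50 = simulate_dose_reduction(full_dose, 0.50, "strong")
```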

  1. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    International Nuclear Information System (INIS)

    Treiber, O; Wanninger, F; Fuehr, H; Panzer, W; Regulla, D; Winkler, G

    2003-01-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography

  2. Simulations of adaptive temperature control with self-focused hyperthermia system for tumor treatment.

    Science.gov (United States)

    Hu, Jiwen; Ding, Yajun; Qian, Shengyou; Tang, Xiangde

    2013-01-01

    The control problem in ultrasound therapy is to destroy the tumor tissue with a desired temperature elevation while not harming the intervening healthy tissue. The objective of this research is to present a robust and feasible method to control the temperature distribution and the temperature elevation in the treatment region within a prescribed time, which can improve the curative effect and decrease the treatment time when heating large tumors (≥2.0 cm in diameter). An adaptive self-tuning-regulator (STR) controller is introduced into this control method by adding a time factor with a recursive algorithm, and the speed of sound and the absorption coefficient of the medium are treated as functions of temperature during heating. The presented control method is tested for a self-focused concave spherical transducer (0.5 MHz, 9 cm aperture, 8.0 cm focal length) through numerical simulations with three control temperatures of 43°C, 50°C and 55°C. The results suggest that this control system adapts to variable parameters and responds rapidly in temperature and acoustic power output within the prescribed time for hyperthermia purposes. There is no overshoot during the temperature elevation and no oscillation after reaching the desired temperature. The same results are obtained for different frequencies and temperature elevations. The method produces an ellipsoid-shaped ablation region, which is meaningful for the treatment of large tumors. Copyright © 2012 Elsevier B.V. All rights reserved.
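
    The self-tuning-regulator idea can be sketched in a few lines: a first-order thermal model T[k+1] = a*T[k] + b*u[k] is identified online by recursive least squares, and the applied power u[k] is chosen so that the identified model reaches the setpoint. The plant values and gains below are assumptions; the paper's controller additionally accounts for the temperature dependence of the acoustic parameters.

```python
import numpy as np

a_true, b_true = 0.95, 0.8          # "real" tissue response (unknown to the STR)
theta = np.array([0.5, 0.5])        # RLS estimates of [a, b]
P = np.eye(2) * 100.0               # RLS covariance
T, T_set = 37.0, 50.0               # current and target temperature, deg C
for k in range(60):
    # One-step-ahead control from the current model, with a power limit.
    u = np.clip((T_set - theta[0] * T) / max(theta[1], 1e-3), 0.0, 30.0)
    T_next = a_true * T + b_true * u            # plant response
    phi = np.array([T, u])                      # regressor
    # Recursive least squares update of the model parameters.
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (T_next - phi @ theta)
    P = P - np.outer(gain, phi @ P)
    T = T_next
print(f"final temperature {T:.2f} C, identified a,b = {theta.round(3)}")
```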

  3. Systematic testing of flood adaptation options in urban areas through simulations

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2016-04-01

    While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate-dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network, over local retention of rainwater and the modification of surface flow paths, to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels of the drivers of flood risk at which the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into points in time by which additional adaptation measures must be in place.

  4. The Basic Immune Simulator: An agent-based model to study the interactions between innate and adaptive immunity

    Directory of Open Access Journals (Sweden)

    Orosz Charles G

    2007-09-01

    Abstract. Background: We introduce the Basic Immune Simulator (BIS), an agent-based model created to study the interactions between the cells of the innate and adaptive immune system. Innate immunity, the initial host response to a pathogen, generally precedes adaptive immunity, which generates immune memory for an antigen. The BIS simulates basic cell types, mediators and antibodies, and consists of three virtual spaces representing parenchymal tissue, secondary lymphoid tissue and the lymphatic/humoral circulation. The BIS includes a Graphical User Interface (GUI) to facilitate its use as an educational and research tool. Results: The BIS was used to qualitatively examine the innate and adaptive interactions of the immune response to a viral infection. Calibration was accomplished via a parameter sweep of initial agent population size and comparison of simulation patterns to those reported in the basic science literature. The BIS demonstrated that the degree of the initial innate response was a crucial determinant of an appropriate adaptive response. Deficiency or excess in innate immunity resulted in excessive proliferation of adaptive immune cells. Deficiency in any of the immune system components increased the probability of failure to clear the simulated viral infection. Conclusion: The behavior of the BIS matches both normal and pathological behavior patterns in a generic viral infection scenario. Thus, the BIS effectively translates mechanistic cellular and molecular knowledge regarding the innate and adaptive immune response and reproduces the immune system's complex behavioral patterns. The BIS can be used both as an educational tool to demonstrate the emergence of these patterns and as a research tool to systematically identify potential targets for more effective treatment strategies for disease processes including hypersensitivity reactions (allergies, asthma), autoimmunity and cancer. We believe that the BIS can be a useful addition to the toolkit of computational models available for immunology education and research.

  5. Influence of alloying and secondary annealing on anneal hardening ...

    Indian Academy of Sciences (India)

    Unknown

    Influence of alloying and secondary annealing on anneal hardening effect at sintered copper alloys. SVETLANA NESTOROVIC. Technical Faculty Bor, University of Belgrade, Bor, Yugoslavia. MS received 11 February 2004; revised 29 October 2004. Abstract. This paper reports results of investigation carried out on sintered ...

  6. Evaluating adaptation options for urban flooding based on new high-end emission scenario regional climate model simulations

    DEFF Research Database (Denmark)

    Arnbjerg-Nielsen, Karsten; Leonardsen, L.; Madsen, Henrik

    2015-01-01

    Climate change adaptation studies on urban flooding are often based on a model chain approach from climate forcing scenarios to analysis of adaptation measures. Previous analyses of climate change impacts in Copenhagen, Denmark, were supplemented by two high-end scenario simulations. These include a regional climate model projection forced to a global temperature increase of 6 degrees C in 2100 as well as a projection based on a high radiative forcing scenario (RCP8.5). With these scenarios, projected impacts of extreme precipitation increase significantly. For extreme sea surges, the impacts do … by almost 4 and 8 times the current expected annual damage (EAD) for the RCP8.5 and 6 degrees C scenario, respectively. For both hazards, business-as-usual is not a possible scenario, since even in the absence of policy-driven changes, significant autonomous adaptation is likely to occur. Copenhagen has developed an adaptation plan…

  7. Topical problems of crackability in weld annealing of low-alloyed pressure vessel steels

    International Nuclear Information System (INIS)

    Holy, M.

    1977-01-01

    The following method was developed for determining annealing crackability: a sharp notch was made in the middle of the bodies of rods whose weld thermal cycles had been reproduced in a welding simulator. Chucking heads were modified so as to permit chucking a rod in an austenitic block by securing the nut. Prestress was controlled by button-headed screw adapters. The blocks were made of 4 types of austenitic steels with graded thermal expansivity coefficients, all higher than that of the tested low-alloyed steel rod. The blocks with rods were placed in a furnace and heated at a rate of 100 degC/h. Because the austenitic block expands more than the rod, the rod was progressively stretched and, at a temperature above 500 degC, was pulled apart. The risk of annealing cracking of welded joints may be reduced by the choice of material and melt and by the welding technology, mainly by choosing a suitable filler material in whose weld metal the plastic deformation during annealing preferentially takes place. (J.P.)

  8. DOE's annealing prototype demonstration projects

    International Nuclear Information System (INIS)

    Warren, J.; Nakos, J.; Rochau, G.

    1997-01-01

    One of the challenges U.S. utilities face in addressing technical issues associated with the aging of nuclear power plants is the long-term effect of plant operation on reactor pressure vessels (RPVs). As a nuclear plant operates, its RPV is exposed to neutrons. For certain plants, this neutron exposure can cause embrittlement of some of the RPV welds which can shorten the useful life of the RPV. This RPV embrittlement issue has the potential to affect the continued operation of a number of operating U.S. pressurized water reactor (PWR) plants. However, RPV material properties affected by long-term irradiation are recoverable through a thermal annealing treatment of the RPV. Although a dozen Russian-designed RPVs and several U.S. military vessels have been successfully annealed, U.S. utilities have stated that a successful annealing demonstration of a U.S. RPV is a prerequisite for annealing a licensed U.S. nuclear power plant. In May 1995, the Department of Energy's Sandia National Laboratories awarded two cost-shared contracts to evaluate the feasibility of annealing U.S. licensed plants by conducting an anneal of an installed RPV using two different heating technologies. The contracts were awarded to the American Society of Mechanical Engineers (ASME) Center for Research and Technology Development (CRTD) and MPR Associates (MPR). The ASME team completed its annealing prototype demonstration in July 1996, using an indirect gas furnace at the uncompleted Public Service of Indiana's Marble Hill nuclear power plant. The MPR team's annealing prototype demonstration was scheduled to be completed in early 1997, using a direct heat electrical furnace at the uncompleted Consumers Power Company's nuclear power plant at Midland, Michigan. This paper describes the Department's annealing prototype demonstration goals and objectives; the tasks, deliverables, and results to date for each annealing prototype demonstration; and the remaining annealing technology challenges

  9. Assessment of urban pluvial flood risk and efficiency of adaptation options through simulations - A new generation of urban planning tools

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2017-07-01

    We present a new framework for flexible testing of flood risk adaptation strategies in a variety of urban development and climate scenarios. This framework couples the 1D-2D hydrodynamic simulation package MIKE FLOOD with the agent-based urban development model DAnCE4Water and provides the possibility to systematically test various flood risk adaptation measures, ranging from large infrastructure changes over decentralised water management to urban planning policies. We tested the framework in a case study in Melbourne, Australia, considering 9 scenarios for urban development and climate and 32 potential combinations of flood adaptation measures. We found that the performance of adaptation measures strongly depended on the considered climate and urban development scenario and on the other measures implemented alongside them, suggesting that adaptive strategies are preferable over one-off investments. Urban planning policies proved to be an efficient means for the reduction of flood risk, while implementing property buyback and pipe enlargements in a guideline-oriented manner was too costly. Random variations in the location and timing of urban development could have a significant impact on flood risk and would in some cases outweigh the benefits of less efficient adaptation strategies. The results of our setup can serve as an input for robust decision making frameworks and thus support the identification of flood risk adaptation measures that are economically efficient and robust to variations of climate and urban layout.

  10. Grand Canonical adaptive resolution simulation for molecules with electrons: A theoretical framework based on physical consistency

    Science.gov (United States)

    Delle Site, Luigi

    2018-01-01

    A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at a constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to the first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with a corresponding definition of numerical criteria for controlling the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.

  11. Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors.

    Science.gov (United States)

    Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J

    2015-02-20

    Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The newly proposed priors are able to adaptively utilize the researcher's previous experience and the current accrual data to produce an estimate of the trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing a smaller mean squared error, a high percentage of coverage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when accrual is far off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when the accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.
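
    For the underlying constant accrual model, a minimal sketch: the accrual rate gets a Gamma prior, observed enrolment updates it conjugately, and the completion-time distribution follows by simulation. The prior strength plays the role that the informative, weak, accelerated and hedging priors play in the paper; all numbers are illustrative assumptions.

```python
import numpy as np

n_target, n_obs, t_obs = 300, 90, 12.0   # patients; months elapsed
prior_rate, prior_n = 10.0, 6.0          # prior guess: 10/month, worth 6 months
a_post = prior_rate * prior_n + n_obs    # Gamma shape after observing data
b_post = prior_n + t_obs                 # Gamma rate (in months)

rng = np.random.default_rng(0)
lam = rng.gamma(a_post, 1.0 / b_post, size=20_000)  # posterior accrual rates
remaining = rng.gamma(n_target - n_obs, 1.0 / lam)  # waiting time for the
completion = t_obs + remaining                      # remaining 210 patients
print(f"median completion: {np.median(completion):.1f} months "
      f"(95% interval {np.percentile(completion, 2.5):.1f}-"
      f"{np.percentile(completion, 97.5):.1f})")
```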

  12. Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.

    Science.gov (United States)

    Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli

    2017-03-01

    In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty lies in that it provides a systematic design method for the controller structure through the direct usage of I/O data, rather than using a first-principles model or an offline identified plant model. The controller structure is determined by an equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned by the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for the RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in the crude styrene production process.

  13. Direct calculation of 1-octanol-water partition coefficients from adaptive biasing force molecular dynamics simulations.

    Science.gov (United States)

    Bhatnagar, Navendu; Kamath, Ganesh; Chelst, Issac; Potoff, Jeffrey J

    2012-07-07

    The 1-octanol-water partition coefficient log K(ow) of a solute is a key parameter used in the prediction of a wide variety of complex phenomena such as drug availability and the bioaccumulation potential of trace contaminants. In this work, adaptive biasing force molecular dynamics simulations are used to determine absolute free energies of hydration, solvation, and 1-octanol-water partition coefficients for n-alkanes from methane to octane. Two approaches are evaluated: the direct transfer of the solute from the 1-octanol to the water phase, and separate transfers of the solute from the water or 1-octanol phase to vacuum, with both methods yielding statistically indistinguishable results. Calculations performed with the TIP4P and SPC/E water models and the TraPPE united-atom force field for n-alkanes show that the choice of water model has a negligible effect on the predicted free energies of transfer and partition coefficients for n-alkanes. A comparison of calculations using wet and dry octanol phases shows that the predictions for log K(ow) using wet octanol are 0.2-0.4 log units lower than for dry octanol, although this is within the statistical uncertainty of the calculation.
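
    For orientation, converting the computed transfer free energies into log K(ow) is one line of thermodynamics; the sketch below uses the standard relation with invented free-energy values (the paper's numbers are not reproduced here).

    ```python
    # Hedged sketch: log K_ow from solvation free energies via the standard
    # thermodynamic relation. The numbers below are placeholders.
    import math

    R = 8.314462618e-3  # gas constant, kJ/(mol*K)

    def log_kow(dG_hydration, dG_solvation_octanol, T=298.15):
        """log10 of the 1-octanol/water partition coefficient.

        dG values are solvation free energies in kJ/mol (negative = favorable).
        Transfer free energy water -> octanol is dG_oct - dG_wat, and
        log K_ow = -(dG_oct - dG_wat) / (ln(10) * R * T).
        """
        dG_transfer = dG_solvation_octanol - dG_hydration
        return -dG_transfer / (math.log(10.0) * R * T)

    # Illustrative: a solute solvated more favorably in octanol partitions there.
    print(log_kow(dG_hydration=8.4, dG_solvation_octanol=-9.6))  # ~3.2
    ```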

  14. Survival and Adaptation of the Thermophilic Species Geobacillus thermantarcticus in Simulated Spatial Conditions

    Science.gov (United States)

    Di Donato, Paola; Romano, Ida; Mastascusa, Vincenza; Poli, Annarita; Orlando, Pierangelo; Pugliese, Mariagabriella; Nicolaus, Barbara

    2018-03-01

    Astrobiology studies the origin and evolution of life on Earth and in the universe. According to the panspermia theory, life on Earth could have emerged from bacterial species transported by meteorites that were able to adapt and proliferate on our planet. Therefore, the study of extremophiles, i.e. bacterial species able to live in extreme terrestrial environments, can be relevant to Astrobiology studies. In this work we describe the ability of the thermophilic species Geobacillus thermantarcticus to survive exposure to simulated space conditions, including temperature variation, desiccation, and X-ray and UVC irradiation. The response to exposure to these space conditions was assessed at the molecular level by studying changes in the morphology, the lipid and protein patterns, and the nucleic acids. G. thermantarcticus survived exposure to all the stress conditions examined, since it was able to restart cellular growth at levels comparable to control experiments carried out under optimal growth conditions. Survival was achieved by changing the distribution of proteins and lipids and by protecting the integrity of the DNA.

  15. Simulated human eye retina adaptive optics imaging system based on a liquid crystal on silicon device

    International Nuclear Information System (INIS)

    Jiang Baoguang; Cao Zhaoliang; Mu Quanquan; Hu Lifa; Li Chao; Xuan Li

    2008-01-01

    In order to obtain a clear image of the retina of a model eye, an adaptive optics system used to correct the wave-front error is introduced in this paper. The spatial light modulator used here is a liquid crystal on silicon device instead of a conventional deformable mirror. A paper with carbon granules is used to simulate the retina of the human eye. The pupil size of the model eye is adjustable (3-7 mm). A Shack-Hartmann wave-front sensor is used to detect the wave-front aberration. With this construction, a peak-to-valley value of 0.086 λ is achieved, where λ is the wavelength. The modulation transfer functions before and after correction are compared, and the resolution of the system after correction (69 lp/mm) is very close to the diffraction-limited resolution. The carbon granules on the white paper, which have a size of 4.7 μm, are seen clearly. The size of a retina cell is between 4 and 10 μm, so this system is able to image the retina of the human eye. (classical areas of phenomenology)

  16. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    International Nuclear Information System (INIS)

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-01-01

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  17. Adapting HYDRUS-1D to Simulate Overland Flow and Reactive Transport During Sheet Flow Deviations

    Science.gov (United States)

    Liang, J.; Bradford, S. A.; Simunek, J.; Hartmann, A.

    2017-12-01

    The HYDRUS-1D code is a popular numerical model for solving the Richards equation for variably-saturated water flow and solute transport in porous media. This code was adapted to solve, instead of the Richards equation for subsurface flow, the diffusion wave equation for overland flow at the soil surface. The numerical results obtained with the new model are in excellent agreement with the analytical solution of the kinematic wave equation. Model tests demonstrated its applicability to simulating the transport and fate of many different solutes, such as non-adsorbing tracers, nutrients, pesticides, and microbes. However, the diffusion wave and kinematic wave equations describe surface runoff as sheet flow with a uniform depth and velocity across the slope. In reality, overland water flow and transport processes are rarely uniform. Local soil topography, vegetation, and spatial soil heterogeneity control the directions and magnitudes of water fluxes and strongly influence runoff characteristics. There is increasing evidence that variations in soil surface characteristics influence the distribution of overland flow and the transport of pollutants. These spatially varying surface characteristics are likely to generate non-equilibrium flow and transport processes. HYDRUS-1D includes a hierarchical series of models of increasing complexity that account for both physical equilibrium and non-equilibrium, e.g., dual-porosity and dual-permeability models, up to a dual-permeability model with immobile water. The same conceptualization as used for the subsurface was implemented to simulate non-equilibrium overland flow and transport at the soil surface. The developed model improves our ability to describe non-equilibrium overland flow and transport processes and our understanding of the factors that cause this behavior. The HYDRUS-1D overland flow and transport model was additionally extended to simulate soil erosion. The HYDRUS-1D Soil Erosion Model has been verified by
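
    For orientation, the governing equations mentioned above take the following standard textbook forms (the adapted code's exact formulation may differ): a mass balance for overland flow with a Manning-type flux, where the kinematic wave approximates the friction slope by the bed slope alone and the diffusion wave retains the depth gradient.

    ```latex
    \frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} = r_e,
    \qquad
    q = \frac{1}{n}\, h^{5/3} \sqrt{S_f},
    \qquad
    S_f =
    \begin{cases}
    S_0 & \text{(kinematic wave)}\\[4pt]
    S_0 - \dfrac{\partial h}{\partial x} & \text{(diffusion wave)}
    \end{cases}
    ```

    Here h is the flow depth, q the unit discharge, r_e the rainfall excess, n Manning's roughness, and S_0 the bed slope.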

  18. Analysis, Adaptive Control and Adaptive Synchronization of a Nine-Term Novel 3-D Chaotic System with Four Quadratic Nonlinearities and its Circuit Simulation

    Directory of Open Access Journals (Sweden)

    S. Vaidyanathan

    2014-11-01

    Full Text Available This research work describes a nine-term novel 3-D chaotic system with four quadratic nonlinearities and details its qualitative properties. The phase portraits of the 3-D novel chaotic system, simulated using MATLAB, depict the strange chaotic attractor of the system. For the parameter values chosen in this work, the Lyapunov exponents of the novel chaotic system are obtained as L1 = 6.8548, L2 = 0 and L3 = −32.8779. Also, the Kaplan-Yorke dimension of the novel chaotic system is obtained as DKY = 2.2085. Next, an adaptive controller is designed to achieve global stabilization of the 3-D novel chaotic system with unknown system parameters. Moreover, an adaptive controller is designed to achieve global chaos synchronization of two identical novel chaotic systems with unknown system parameters. Finally, an electronic circuit realization of the novel chaotic system is presented using SPICE to confirm the feasibility of the theoretical model.
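
    The quoted Kaplan-Yorke dimension can be checked directly from the reported Lyapunov exponents; the snippet below applies the standard formula (an assumption insofar as the abstract does not spell it out) and reproduces DKY = 2.2085.

    ```python
    # Kaplan-Yorke dimension: D_KY = j + (L1 + ... + Lj) / |L_{j+1}|,
    # with j the largest index whose partial sum of sorted exponents
    # is still non-negative.
    def kaplan_yorke(exponents):
        L = sorted(exponents, reverse=True)
        s, j = 0.0, 0
        while j < len(L) and s + L[j] >= 0.0:
            s += L[j]
            j += 1
        if j == len(L):
            return float(j)
        return j + s / abs(L[j])

    print(kaplan_yorke([6.8548, 0.0, -32.8779]))  # 2.2085 (rounded)
    ```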

  19. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars

    Science.gov (United States)

    Rampy, Rachel A.

    Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade has the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2-1.2 mm at a 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at the University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and

  20. Radiation annealing in cuprous oxide

    DEFF Research Database (Denmark)

    Vajda, P.

    1966-01-01

    Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square...

  1. Extrapolation of zircon fission-track annealing models

    International Nuclear Information System (INIS)

    Palissari, R.; Guedes, S.; Curvo, E.A.C.; Moreira, P.A.F.P.; Tello, C.A.; Hadler, J.C.

    2013-01-01

    One of the purposes of this study is to give further constraints on the temperature range of the zircon partial annealing zone over a geological time scale, using data from borehole zircon samples that have experienced stable temperatures for ∼1 Ma. In this way, the extrapolation problem is explicitly addressed by fitting the zircon annealing models with geological-timescale data. Several empirical model formulations have been proposed to perform these calibrations and are compared in this work. The basic form proposed for annealing models is the Arrhenius-type model. There are other annealing models that are based on the same general formulation. These empirical model equations have been preferred due to the great number of phenomena, from track formation to chemical etching, that are not well understood. However, there are two other models which try to establish a direct correlation between their parameters and the related phenomena. To compare the responses of the different annealing models, thermal indexes, such as the closure temperature, the total annealing temperature and the partial annealing zone, have been calculated and compared with field evidence. After comparing the different models, it was concluded that the fanning curvilinear models yield the best agreement between predicted index temperatures and field evidence. - Highlights: ► Geological data were used along with lab data for improving model extrapolation. ► Index temperatures were simulated for testing model extrapolation. ► Curvilinear Arrhenius models produced better geological temperature predictions

  2. Optimization of a water resource system expansion using the Genetic Algorithm and Simulated Annealing methods

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Camacho, Enrique; Andreu Alvarez, Joaquin [Universidad Politecnica de Valencia (Spain)

    2001-06-01

    Two numerical procedures, based on the Genetic Algorithm (GA) and on Simulated Annealing (SA), are developed to solve the problem of expanding the capacity of a water resource system. The problem was divided into two subproblems: capital availability and operation policy. Both are optimisation-simulation models; the first is solved by means of the GA and the SA, respectively, while the second is solved using the Out-of-kilter algorithm (OKA) in both models. The objective function considers the usual benefits and costs in this kind of system, such as irrigation and hydropower benefits and the costs of dam construction and system maintenance. The strengths and weaknesses of both models are evaluated by comparing their results with those obtained with the branch and bound technique, which was classically used to solve this kind of problem.
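
    To make the optimisation-simulation split concrete, here is a toy genetic-algorithm sketch for the capital-availability subproblem: a binary build/no-build plan is evolved against an invented cost/benefit table, with the operation-policy solver (the Out-of-kilter algorithm in the paper) replaced by a trivial benefit sum. Nothing below comes from the paper itself.

    ```python
    # Toy GA for selecting which of N candidate expansion projects to build.
    # Costs, benefits, and the budget are invented for illustration.
    import random

    random.seed(1)
    COST    = [30, 55, 12, 70, 25, 40]   # construction cost per project
    BENEFIT = [45, 60, 20, 95, 28, 55]   # irrigation + hydropower benefit
    BUDGET  = 150

    def fitness(plan):
        cost = sum(c for c, bit in zip(COST, plan) if bit)
        gain = sum(b for b, bit in zip(BENEFIT, plan) if bit)
        return gain - cost if cost <= BUDGET else -1e9  # penalize infeasible

    def evolve(pop_size=40, generations=80, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in COST] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(COST))  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))
    ```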

  3. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms for multi-user cognitive radio network resource allocation, converging quickly and exhibiting strong global search capability, which effectively reduces the system power consumption and bit error rate. (geophysics, astronomy, and astrophysics)

  4. Adaptive coupling between damage mechanics and peridynamics: a route for objective simulation of material degradation up to complete failure

    KAUST Repository

    Han, Fei

    2016-05-17

    The objective (mesh-independent) simulation of evolving discontinuities, such as cracks, remains a challenge. Current techniques are highly complex or involve intractable computational costs, making simulations up to complete failure difficult. We propose a framework as a new route toward solving this problem that adaptively couples local-continuum damage mechanics with peridynamics to objectively simulate all the steps that lead to material failure: damage nucleation, crack formation and propagation. Local-continuum damage mechanics successfully describes the degradation related to dispersed microdefects before the formation of a macrocrack. However, when damage localizes, it suffers spurious mesh dependency, making the simulation of macrocracks challenging. On the other hand, the peridynamic theory is promising for the simulation of fractures, as it naturally allows discontinuities in the displacement field. Here, we present a hybrid local-continuum damage/peridynamic model. Local-continuum damage mechanics is used to describe “volume” damage before localization. Once localization is detected at a point, the remaining part of the energy is dissipated through an adaptive peridynamic model capable of the transition to a “surface” degradation, typically a crack. We believe that this framework, which actually mimics the real physical process of crack formation, is the first bridge between continuum damage theories and peridynamics. Two-dimensional numerical examples are used to illustrate that an objective simulation of material failure can be achieved by this method.
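
    Conceptually, the adaptivity reduces to an irreversible model switch per material point once damage localizes; the sketch below illustrates that logic only, with an invented scalar threshold in place of the paper's actual localization criterion.

    ```python
    # Conceptual sketch of the adaptive local-damage -> peridynamics switch.
    # The threshold and damage law are invented; the paper's criteria differ.
    def update_model_assignment(damage, threshold=0.3, assignment=None):
        """Assign each material point to the local damage model until its
        damage exceeds 'threshold'; from then on it is handled by the
        nonlocal peridynamic model and never switches back."""
        if assignment is None:
            assignment = ["local"] * len(damage)
        for i, d in enumerate(damage):
            if assignment[i] == "local" and d > threshold:
                assignment[i] = "peridynamic"   # irreversible transition
        return assignment

    print(update_model_assignment([0.05, 0.42, 0.31, 0.10]))
    ```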

  5. Adaptive coupling between damage mechanics and peridynamics: a route for objective simulation of material degradation up to complete failure

    KAUST Repository

    Han, Fei; Lubineau, Gilles; Azdoud, Yan

    2016-01-01

    The objective (mesh-independent) simulation of evolving discontinuities, such as cracks, remains a challenge. Current techniques are highly complex or involve intractable computational costs, making simulations up to complete failure difficult. We propose a framework as a new route toward solving this problem that adaptively couples local-continuum damage mechanics with peridynamics to objectively simulate all the steps that lead to material failure: damage nucleation, crack formation and propagation. Local-continuum damage mechanics successfully describes the degradation related to dispersed microdefects before the formation of a macrocrack. However, when damage localizes, it suffers spurious mesh dependency, making the simulation of macrocracks challenging. On the other hand, the peridynamic theory is promising for the simulation of fractures, as it naturally allows discontinuities in the displacement field. Here, we present a hybrid local-continuum damage/peridynamic model. Local-continuum damage mechanics is used to describe “volume” damage before localization. Once localization is detected at a point, the remaining part of the energy is dissipated through an adaptive peridynamic model capable of the transition to a “surface” degradation, typically a crack. We believe that this framework, which actually mimics the real physical process of crack formation, is the first bridge between continuum damage theories and peridynamics. Two-dimensional numerical examples are used to illustrate that an objective simulation of material failure can be achieved by this method.

  6. Using simulated annealing algorithm to optimize the parameters of Biome-BGC model

    Institute of Scientific and Technical Information of China (English)

    张廷龙; 孙睿; 胡波; 冯丽超

    2011-01-01

    Ecological process models are built on well-defined mechanisms and can simulate the dynamic behavior and features of terrestrial ecosystems well, but their numerous parameters become a bottleneck in practical applications. Taking the Biome-BGC model as an example, this paper uses a simulated annealing algorithm to optimize the model's physiological and ecological parameters. In the optimization procedure, the parameters to be optimized were first selected and then optimized stepwise. The results show that, with the optimized parameters, the model simulations are much closer to the observed data, and that parameter optimization can effectively reduce the uncertainty of the model simulations. The optimization process and method used here provide a worked case and an approach for the parameter identification and optimization of ecological process models, and help extend the domains in which such models can be applied.
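
    For readers unfamiliar with the method, the core of a simulated-annealing parameter fit looks roughly like the sketch below; the quadratic toy model and every tuning constant are stand-ins for the far larger Biome-BGC setup, not the paper's actual configuration.

    ```python
    # Generic simulated-annealing parameter fit (Metropolis acceptance,
    # geometric cooling). The linear toy "model" stands in for Biome-BGC.
    import math, random

    random.seed(2)
    obs = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]

    def cost(params):
        """Sum of squared differences between model output and observations."""
        a, b = params
        return sum((a * x + b - y) ** 2 for x, y in obs)

    def anneal(p0, T0=1.0, cooling=0.995, steps=20000, step_size=0.1):
        p, c = list(p0), cost(p0)
        best, best_c = list(p), c
        T = T0
        for _ in range(steps):
            q = [v + random.uniform(-step_size, step_size) for v in p]
            cq = cost(q)
            # Metropolis rule: always accept improvements, occasionally
            # accept worse moves so the search can escape local minima
            if cq < c or random.random() < math.exp((c - cq) / T):
                p, c = q, cq
                if c < best_c:
                    best, best_c = list(p), c
            T *= cooling  # geometric cooling schedule
        return best, best_c

    print(anneal([0.0, 0.0]))  # parameters should approach a ~ 3, b ~ 1
    ```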

  7. Adaptive local refinement and multi-level methods for simulating multiphasic flows

    International Nuclear Information System (INIS)

    Minjeaud, Sebastian

    2010-01-01

    This thesis describes some numerical and mathematical aspects of simulating incompressible multiphase flows with a diffuse-interface Cahn-Hilliard / Navier-Stokes model (interfaces have a small but positive thickness). The space discretization is performed with a Galerkin formulation and the finite element method. The presence of different scales in the system (interfaces have a very small thickness compared to the characteristic lengths of the domain) suggests the use of a local adaptive refinement method. The algorithm that is introduced allows the non-conformities of the generated meshes to be handled implicitly, producing conformal finite element approximation spaces. It consists of refining basis functions instead of cells. The refinement of a basis function is made possible by the conceptual existence of a nested sequence of uniformly refined grids, from which 'parent-child' relationships are deduced, linking the basis functions of two consecutive refinement levels. Moreover, it is shown how this method can be exploited to build multigrid preconditioners. From a composite finite element approximation space, it is indeed possible to rebuild, by 'coarsening', a sequence of auxiliary nested spaces, which allows one to enter the abstract multigrid framework. Concerning the time discretization, the work begins with a study of the Cahn-Hilliard system. A semi-implicit scheme is proposed to remedy convergence failures of the Newton method used to solve this (non-linear) system. It guarantees the decrease of the discrete free energy, ensuring the stability of the scheme. The existence and convergence of discrete solutions towards the weak solution of the system are shown. The study continues by providing an unconditionally stable time discretization of the complete Cahn-Hilliard / Navier-Stokes model. An important point is that this discretization does not strongly couple the Cahn-Hilliard and Navier-Stokes systems, allowing the two systems to be solved independently.

  8. A Timed Colored Petri Net Simulation-Based Self-Adaptive Collaboration Method for Production-Logistics Systems

    OpenAIRE

    Zhengang Guo; Yingfeng Zhang; Xibin Zhao; Xiaoyu Song

    2017-01-01

    Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time and multi-source manufacturing data and logistics data is created, that can be used to perform production-logistics collaboration. To solve the aforementioned problems, this paper proposes a timed colored Petri net simulation-based self-adaptive colla...

  9. Single Wake Meandering, Advection and Expansion - An analysis using an adapted Pulsed Lidar and CFD LES-ACL simulations

    DEFF Research Database (Denmark)

    In this paper, single wake characteristics have been studied both experimentally and numerically. Firstly, the wake is studied experimentally using full-scale measurements from an adapted focused pulsed lidar system, which potentially gives more insight into the wake dynamics as compared to class...... using the EllipSys3D flow solver using Large Eddy Simulation (LES) and Actuator Line Technique (ACL) to model the rotor. Discrepancies due to the uncertainties on the wake advection velocity are observed and discussed....

  10. Single Wake Meandering, Advection and Expansion - An analysis using an adapted Pulsed Lidar and CFD LES-ACL simulations

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels

    2013-01-01

    In this paper, single wake characteristics have been studied both experimentally and numerically. Firstly, the wake is studied experimentally using full-scale measurements from an adapted focused pulsed lidar system, which potentially gives more insight into the wake dynamics as compared to class...... using the EllipSys3D flow solver using Large Eddy Simulation (LES) and Actuator Line Technique (ACL) to model the rotor. Discrepancies due to the uncertainties on the wake advection velocity are observed and discussed....

  11. A comparative study of cold- and warm-adapted Endonucleases A using sequence analyses and molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Davide Michetti

    Full Text Available The psychrophilic and mesophilic endonucleases A (EndA) from Aliivibrio salmonicida (VsEndA) and Vibrio cholerae (VcEndA) have been studied experimentally in terms of the biophysical properties related to thermal adaptation. The analysis of their static X-ray structures was not sufficient to rationalize the determinants of their adaptive traits at the molecular level. Thus, we used Molecular Dynamics (MD) simulations to compare the two proteins and unveil their structural and dynamical differences. Our simulations did not show a substantial increase in flexibility in the cold-adapted variant on the nanosecond time scale. The only exception is a more rigid C-terminal region in VcEndA, which is ascribable to a cluster of electrostatic interactions and hydrogen bonds, as also supported by MD simulations of the VsEndA mutant variant into which the cluster of interactions was introduced. Moreover, we identified three additional amino acid substitutions through multiple sequence alignment and the analysis of MD-based protein structure networks. In particular, T120V occurs in the proximity of the catalytic residue H80 and alters the interaction with residue Y43, which belongs to the second coordination sphere of the Mg2+ ion. This makes T120V an amenable candidate for future experimental mutagenesis.

  12. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    Science.gov (United States)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptation strategy for reducing simulation errors in integral outputs (functionals), such as lift or drag, from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
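
    Output-based adaptation of this kind typically rests on the adjoint-weighted residual estimate; in standard notation (an assumption here, since the report's exact formulation is not shown), the output error and its elemental localization read:

    ```latex
    \delta J \;\equiv\; J(u_h) - J(u_H)
    \;\approx\; -\,\psi_h^{T}\, R_h\!\left(u_H^{h}\right),
    \qquad
    \mathcal{E}_k \;=\; \left|\psi_{h,k}^{T}\, R_{h,k}\!\left(u_H^{h}\right)\right|,
    ```

    where u_H is the coarse-space solution injected into the fine space, R_h the fine-space residual, psi_h the discrete adjoint for the output J, and E_k the element-wise indicator that drives refinement.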

  13. Adaptation to hydrological extremes through insurance: a financial fund simulation model under changing scenarios

    Science.gov (United States)

    Guzman, Diego; Mohor, Guilherme; Câmara, Clarissa; Mendiondo, Eduardo

    2017-04-01

    Research from around the world relates global environmental change to increased vulnerability to extreme events, such as heavy or scarce precipitation (floods and droughts). Hydrological disasters have caused increasing losses in recent years. Thus, risk transfer mechanisms, such as insurance, are being implemented to mitigate impacts, finance the recovery of the affected population, and promote the reduction of hydrological risks. The main difficulties in implementing these strategies are, first, partial knowledge of natural and anthropogenic climate change in terms of intensity and frequency; second, that efficient risk reduction policies require accurate risk assessment, with careful consideration of costs; and third, the uncertainty associated with the numerical models and input data used. The objective of this document is to introduce and discuss the feasibility of applying Hydrological Risk Transfer Models (HRTMs) as a strategy of adaptation to global climate change. The article shows the development of a methodology for collective and multi-sectoral vulnerability management, facing long-term hydrological risk, using an insurance fund simulator. The methodology estimates the optimized premium as a function of willingness to pay (WTP) and the potential direct loss derived from hydrological risk. The proposed methodology structures the watershed insurance scheme in three analysis modules. First, the hazard module characterizes the hydrologic threat from recorded series or from series modelled under IPCC / RCM scenarios. Second, the vulnerability module calculates the potential economic loss for each sector evaluated as a function of the return period "TR". Finally, the finance module determines the value of the optimal aggregate premium by evaluating equiprobable scenarios of water vulnerability, taking into account variables such as the maximum limit of coverage, deductible
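
    The finance-module idea can be caricatured in a few lines: simulate annual losses, convert them to a loaded premium, and cap it at willingness to pay. Every distribution and constant below is invented for illustration and is not taken from the study.

    ```python
    # Sketch of a pure premium from simulated annual flood losses, capped by
    # willingness to pay (WTP). All numbers and distributions are invented.
    import numpy as np

    rng = np.random.default_rng(3)

    def optimal_premium(n_years=100000, wtp=50.0, deductible=5.0,
                        coverage_cap=200.0):
        # Annual flood loss: usually zero, occasionally heavy-tailed
        event = rng.random(n_years) < 0.08            # ~1-in-12.5-year event
        raw_loss = rng.lognormal(mean=3.0, sigma=1.0, size=n_years) * event
        payout = np.clip(raw_loss - deductible, 0.0, coverage_cap)
        pure_premium = payout.mean()                  # actuarially fair premium
        loaded = 1.2 * pure_premium                   # 20% loading, illustrative
        return min(loaded, wtp)                       # respect willingness to pay

    print(round(optimal_premium(), 2))
    ```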

  14. Adaptive Mesh Refinement and High Order Geometrical Moment Method for the Simulation of Polydisperse Evaporating Sprays

    Directory of Open Access Journals (Sweden)

    Essadki Mohamed

    2016-09-01

    Full Text Available Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in order to properly predict various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with a maximization of entropy technique in order to provide a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comput. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and coupling with a separated-phases model describing the dynamics and atomization of the interface near the injector. The novelty is to be found in terms of modeling, numerical schemes and implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively

  15. Annealing relaxation of ultrasmall gold nanostructures

    Science.gov (United States)

    Chaban, Vitaly

    2015-01-01

    Besides serving as an excellent gift on proper occasions, gold finds applications in the life sciences, particularly in diagnostics and therapeutics. These applications were made possible by gold nanoparticles, which differ drastically from macroscopic gold. The versatile surface chemistry of gold nanoparticles allows coating with small molecules, polymers, and biological recognition molecules. Theoretical investigation of nanoscale gold is not trivial because of the numerous metastable states in these systems. Unlike earlier studies, this work obtains equilibrium structures using annealing simulations within the recently introduced PM7-MD method. Geometries of ultrasmall gold nanostructures with chalcogen coverage are described at finite temperature for the first time.

  16. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available: silica (quartz, chalcedony, and opal) → alunite → kaolinite → illite → smectite → chlorite. Associated with this mineral alteration are high sulphidation gold deposits and low sulphidation base metal deposits. Gold mineralization is located... of vuggy (porous) quartz, opal and gray and black chalcedony veins. Vuggy (porous) quartz is formed from extreme leaching of the host rock. It hosts high sulphidation gold mineralization and is evidence for a hypogene event. Alteration...

  17. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of "high-performance sailing". The algorithm uses a new discrete space of continuously differentiable functions, called multi-splines, as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results, since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box performance measure and use of OpenCL.

  18. A simulation framework for asset management in climate-change adaptation of transportation infrastructure

    NARCIS (Netherlands)

    Bhamidipati, S.K.

    2014-01-01

    An asset management framework, in an agent-based model with multiple assets, is presented as a tool that can assist in developing long-term climate change adaptation strategies for transportation infrastructure.

  19. (YIP 2011) Unsteady Output-based Adaptive Simulation of Separated and Transitional Flows

    Science.gov (United States)

    2015-03-19

    Only report front matter survives in this record: project personnel (Marco Ceze and Steven Kast, Aerospace Eng., U. Michigan) and references including S. M. Kast, M. A. Ceze, and K. J. Fidkowski, "Output-adaptive solution strategies for unsteady aerodynamics on deformable domains," Seventh International Conference on Computational Fluid Dynamics, ICCFD7-3802, 2012, and S. M. Kast and K. J. Fidkowski, "Output-based mesh adaptation for high order..."

  20. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008); 10.1103/PhysRevD.78.123524] and Schmidt [Phys. Rev. D 79, 083518 (2009); 10.1103/PhysRevD.79.083518], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  1. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  2. Covert rapid action-memory simulation (CRAMS): a hypothesis of hippocampal-prefrontal interactions for adaptive behavior.

    Science.gov (United States)

    Wang, Jane X; Cohen, Neal J; Voss, Joel L

    2015-01-01

    Effective choices generally require memory, yet little is known regarding the cognitive or neural mechanisms that allow memory to influence choices. We outline a new framework proposing that covert memory processing of hippocampus interacts with action-generation processing of prefrontal cortex in order to arrive at optimal, memory-guided choices. Covert, rapid action-memory simulation (CRAMS) is proposed here as a framework for understanding cognitive and/or behavioral choices, whereby prefrontal-hippocampal interactions quickly provide multiple simulations of potential outcomes used to evaluate the set of possible choices. We hypothesize that this CRAMS process is automatic, obligatory, and covert, meaning that many cycles of action-memory simulation occur in response to choice conflict without an individual's necessary intention and generally without awareness of the simulations, leading to adaptive behavior with little perceived effort. CRAMS is thus distinct from influential proposals that adaptive memory-based behavior in humans requires consciously experienced memory-based construction of possible future scenarios and deliberate decisions among possible future constructions. CRAMS provides an account of why hippocampus has been shown to make critical contributions to the short-term control of behavior, and it motivates several new experimental approaches and hypotheses that could be used to better understand the ubiquitous role of prefrontal-hippocampal interactions in situations that require adaptively using memory to guide choices. Importantly, this framework provides a perspective that allows for testing decision-making mechanisms in a manner that translates well across human and nonhuman animal model systems. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The use of adaptation to reduce simulator sickness in driving assessment and research.

    Science.gov (United States)

    Domeyer, Joshua E; Cassavaugh, Nicholas D; Backs, Richard W

    2013-04-01

    The technical advancement of driving simulators has decreased their cost and increased both their accuracy and fidelity. This makes them a useful tool for examining driving behavior in risky or unique situations. With the approaching increase in older licensed drivers due to the aging of the baby boomers, driving simulators will be important for conducting driving research and evaluations for older adults. With these simulator technologies, some people may experience significant effects of a unique form of motion sickness, known as simulator sickness. These effects may be more pronounced in older adults. The present study examined the feasibility of an intervention to attenuate symptoms of simulator sickness in drivers participating in a study of a driving evaluation protocol. Prior to beginning the experiment, the experimental groups did not differ in subjective simulator sickness scores as indicated by Revised Simulator Sickness Questionnaire scores (all p > 0.5). Participants who experienced a two-day delay between an initial acclimation to the driving simulator and the driving session experienced fewer simulator sickness symptoms, as indicated by RSSQ total severity scores, than participants who did not receive a two-day delay (F(1,88) = 4.54, p = .036, partial η² = .049). These findings have implications for improving client well-being and potentially increasing acceptance of driving simulation for driving evaluations and for driving safety research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Adaptation of Listeria monocytogenes in a simulated cheese medium: effects on virulence using the Galleria mellonella infection model.

    Science.gov (United States)

    Schrama, D; Helliwell, N; Neto, L; Faleiro, M L

    2013-06-01

    The aim of this study was to evaluate the effect of acid and salt adaptation in a cheese-based medium on the virulence potential of Listeria monocytogenes strains isolated from cheese and the dairy processing environment, using the Galleria mellonella model. Four L. monocytogenes strains were exposed to a cheese-based medium under conditions that induce an acid tolerance response and an osmotolerance response (pH 5·5 and 3·5% w/v NaCl) and injected into G. mellonella insects. The survival of the insects and the growth kinetics of L. monocytogenes in the insects were evaluated. The expression of the hly, actA and inlA genes was determined by real-time PCR. The adapted cells of two dairy strains showed reduced insect mortality (P < 0·05), while for the remaining strains no significant difference (P > 0·05) was found between adapted and nonadapted cells. The gene expression results evidenced an overexpression of virulence genes in the cheese-based medium, but not under simulated insect-induced conditions. Our results suggest that adaptation to low pH and salt in a cheese-based medium can affect the virulence of L. monocytogenes, but this effect is strain dependent. In this study, the impact of adaptation to low pH and salt in a cheese-based medium on L. monocytogenes virulence was tested using the wax moth G. mellonella model. This model allowed differentiation of the virulence potential between the L. monocytogenes strains. The effect of adaptation on virulence is strain dependent. The G. mellonella model proved to be a rapid method for testing food-related factors affecting L. monocytogenes virulence. © 2013 The Society for Applied Microbiology.

  5. Parameter-Adaptive Model-Following for In-Flight Simulation.

    Science.gov (United States)

    1987-12-01

    simulation architecture through the use of "smart" simulation cockpits for increased simulation capability and fidelity. Most recently, he has been... Approved for public release; distribution unlimited.

  6. A Hardware-Accelerated Fast Adaptive Vortex-Based Flow Simulation Software, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Applied Scientific Research has recently developed a Lagrangian vortex-boundary element method for the grid-free simulation of unsteady incompressible...

  7. Simulations research of the global predictive control with self-adaptive in the gas turbine of the nuclear power plant

    International Nuclear Information System (INIS)

    Su Jie; Xia Guoqing; Zhang Wei

    2007-01-01

    To further improve the dynamic control capabilities of the gas turbine of a nuclear power plant, this paper proposes applying a global predictive control algorithm with self-adaptation to the rotational speed control of the gas turbine, covering the control structure and the design of the controller, after presenting the mathematical model of the gas turbine of the nuclear power plant. The simulation results show that the response to a change of the gas turbine speed under the self-adaptive global predictive control algorithm is ten seconds faster than under the PID control algorithm, and the gas turbine speed output under the PID control algorithm is 1%-2% higher than under the self-adaptive global predictive control algorithm. This shows that the self-adaptive global predictive control algorithm controls the speed output of the gas turbine of the nuclear power plant better and achieves a better control effect. (authors)

  8. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

    In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: - analysis of the ASTEC modules from the point of view of models and options, followed by CANDU exploratory calculations for the appropriate modules/models; - preparation of the specifications for the ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: - a comparison of the PWR and CANDU concepts (from the point of view of severe accident phenomena); - exploratory calculations with some ASTEC modules - SOPHAEROS, CPA, IODE, CESAR, DIVA - for problems specific to CANDU type reactors; - an analysis of development needs - algorithms, methods, modules. (authors)

  9. Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL

    2009-12-01

    To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used non-linear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment, and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to efficiently attain their objective.
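
    For reference, the canonical particle swarm update underlying this metaphor is (standard textbook form; the report's social-adaptation variant is not spelled out in this record):

    ```latex
    v_i^{t+1} = \omega\, v_i^{t}
              + c_1 r_1 \left(p_i^{\mathrm{best}} - x_i^{t}\right)
              + c_2 r_2 \left(g^{\mathrm{best}} - x_i^{t}\right),
    \qquad
    x_i^{t+1} = x_i^{t} + v_i^{t+1},
    ```

    where ω is the inertia weight, c1 and c2 the cognitive and social coefficients, and r1, r2 are uniform random draws in [0, 1].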

  10. High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Solin, Pavel [Univ. of Reno, NV (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2014-03-09

    We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.

  11. High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering

    International Nuclear Information System (INIS)

    Solin, Pavel; Ragusa, Jean

    2014-01-01

    We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.

  12. Applying computer adaptive testing to optimize online assessment of suicidal behavior: a simulation study.

    NARCIS (Netherlands)

    de Beurs, D.P.; de Vries, A.L.M.; de Groot, M.H.; de Keijser, J.; Kerkhof, A.J.F.M.

    2014-01-01

    Background: The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce
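
    As an illustration of why CAT shortens assessments, the sketch below picks the next item by maximum Fisher information under a two-parameter logistic (2PL) IRT model; the item bank and the model choice are assumptions, not details of the cited study.

    ```python
    # One CAT step under a 2PL IRT model: choose the unused item with maximum
    # Fisher information at the current ability estimate theta.
    import math

    def p_correct(theta, a, b):
        """2PL response probability with discrimination a and difficulty b."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def next_item(theta, items, used):
        """items: list of (a, b) pairs; used: set of already-administered ids."""
        candidates = [i for i in range(len(items)) if i not in used]
        return max(candidates, key=lambda i: item_information(theta, *items[i]))

    bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.1, 1.2)]
    print(next_item(theta=0.4, items=bank, used={2}))  # most informative item
    ```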

  13. Potential of adaptive clinical trial designs in pharmacogenetic research, A simulation based on the IPASS trial

    NARCIS (Netherlands)

    Van Der Baan, Frederieke H.; Knol, Mirjam J.|info:eu-repo/dai/nl/304820350; Klungel, Olaf H.|info:eu-repo/dai/nl/181447649; Egberts, Toine C.G.|info:eu-repo/dai/nl/162850050; Grobbee, Diederick E.; Roes, Kit C.B.

    2011-01-01

    Background: An adaptive clinical trial design that allows population enrichment after interim analysis can be advantageous in pharmacogenetic research if previous evidence is not strong enough to exclude part of the patient population beforehand. With this design, underpowered studies or unnecessary

  14. A Pilot Study Assessing Performance and Visual Attention of Teenagers with ASD in a Novel Adaptive Driving Simulator.

    Science.gov (United States)

    Wade, Joshua; Weitlauf, Amy; Broderick, Neill; Swanson, Amy; Zhang, Lian; Bian, Dayi; Sarkar, Medha; Warren, Zachary; Sarkar, Nilanjan

    2017-11-01

    Individuals with Autism Spectrum Disorder (ASD), compared to typically-developing peers, may demonstrate behaviors that are counter to safe driving. The current work examines the use of a novel simulator in two separate studies. Study 1 demonstrates statistically significant performance differences between individuals with (N = 7) and without ASD (N = 7) with regard to the number of turning-related driving errors (p < 0.05), as well as significant improvement following training (p < 0.05). The simulator also measures the visual attention of drivers and supports an adaptive driving intervention for individuals with ASD.

  15. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL), as part of the Civil Nuclear Energy Working Group, is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement, for the first time, an adaptive time step methodology, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available, based on flux or power convergence criteria, that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested, using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required, without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code, which completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the available methodologies and the tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
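
    Schematically, a convergence-driven adaptive time step of the kind summarized above can be reduced to a small controller; the growth and shrink factors and the tolerances below are invented, as the actual PHISICS/RELAP5-3D criteria are described only qualitatively in this record.

    ```python
    # Generic convergence-based time-step controller (illustrative only).
    def next_time_step(dt, residual, tol, grow=2.0, shrink=0.5,
                       dt_min=1e-4, dt_max=10.0):
        """Grow dt while the neutronics iteration converges comfortably,
        shrink it when the convergence criterion is violated."""
        if residual > tol:
            return max(dt * shrink, dt_min)   # reject-and-retry regime
        if residual < 0.1 * tol:
            return min(dt * grow, dt_max)     # well converged: bigger steps
        return dt                             # keep the current step

    dt = 0.5
    for r in [1e-3, 1e-6, 1e-6, 5e-2]:        # mock residual history, tol = 1e-2
        dt = next_time_step(dt, r, tol=1e-2)
        print(dt)
    ```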

  16. Adaptive smart simulator for characterization and MPPT construction of PV array

    International Nuclear Information System (INIS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-01-01

    Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control and optimization of the photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; the tool is easy to use, simple, and highly responsive, and the simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  17. Adaptive smart simulator for characterization and MPPT construction of PV array

    Science.gov (United States)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading is among the most important problems in large photovoltaic arrays, and much of the literature addresses the modeling, control, and optimization of photovoltaic energy conversion under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; the tool is simple, easy to use, and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT algorithms and power electronic converters.

  18. Adaptive smart simulator for characterization and MPPT construction of PV array

    Energy Technology Data Exchange (ETDEWEB)

    Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah [Electromechanical engineering department, Electromechanical engineering laboratory, Badji Mokhtar University, B.P. 12, Annaba (Algeria)]; Dib, Djalel [Department of Electrical Engineering, University of Tebessa, Tebessa (Algeria)]

    2016-07-25

    Partial shading is among the most important problems in large photovoltaic arrays, and much of the literature addresses the modeling, control, and optimization of photovoltaic energy conversion under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; the tool is simple, easy to use, and responsive, and the simulator supports large-array simulations that can be interfaced with MPPT algorithms and power electronic converters.
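
    None of these records says which MPPT algorithm the simulator constructs; perturb-and-observe (P&O) is a common baseline that such a simulator could drive, sketched here in Python with illustrative names:

        # Hedged sketch of one perturb-and-observe (P&O) MPPT step.
        # Not the authors' algorithm; names and the step size are assumptions.
        def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
            """Return the next PV operating voltage to track maximum power."""
            dp = p - p_prev
            dv = v - v_prev
            if dp == 0 or dv == 0:
                return v + step  # keep probing when nothing changed
            # If the last perturbation raised power, keep moving in the same
            # direction; otherwise reverse.
            direction = 1.0 if (dp > 0) == (dv > 0) else -1.0
            return v + direction * step

    Under partial shading the array's P-V curve can have multiple local maxima, which is precisely why a shading-pattern simulator of this kind is useful for testing whether a given MPPT scheme escapes local peaks.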

  19. Integration of adaptive process control with computational simulation for spin-forming

    International Nuclear Information System (INIS)

    Raboin, P. J. [LLNL]

    1998-01-01

    Improvements in spin-forming capabilities through upgrades to a metrology and machine control system, together with advances in numerical simulation techniques, were studied in a two-year project funded by Laboratory Directed Research and Development (LDRD) at Lawrence Livermore National Laboratory. Numerical analyses were benchmarked against spin-forming experiments, and computational speeds increased sufficiently to permit actual part-forming simulations. Extensive modeling activities examined the simulation speeds and capabilities of several metal-forming computer codes for modeling flat-plate and cylindrical spin-forming geometries. Shape-memory research created the first numerical model describing this highly unusual deformation behavior in uranium alloys. A spin-forming metrology assessment led to sensor and data acquisition improvements that will facilitate future process accuracy enhancements, such as a metrology frame. Finally, software improvements (SmartCAM) numerically integrate the part models with the spin-forming process and with the computational simulations.

  20. Quenching and annealing in the minority game

    Science.gov (United States)

    Burgos, E.; Ceva, Horacio; Perazzo, R. P. J.

    2001-05-01

    We study the bar attendance model (BAM) and a generalized version of the minority game (MG) in which a number of agents self-organize to match an attendance that is fixed externally as a control parameter. We compare the probabilistic dynamics used in the MG with one that we introduce for the BAM that makes better use of the same available information. The relaxation dynamics of the MG leads the system to long-lived, metastable (quenched) configurations in which adaptive evolution stops despite being far from equilibrium. In contrast, the BAM relaxation dynamics avoids the MG glassy state, leading to an equilibrium configuration. Finally, we introduce into the MG model the concept of annealing, defining a new procedure with which one can gradually overcome the metastable MG states, bringing the system to an equilibrium that coincides with the one obtained with the BAM.
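
    The abstract does not spell out the annealing procedure; for orientation only, the textbook simulated-annealing acceptance rule that the term alludes to (a generic Metropolis criterion, not the authors' MG/BAM procedure) is:

        import math
        import random

        # Generic simulated-annealing acceptance rule: always accept
        # improvements, accept worsening moves with probability exp(-d/T),
        # and lower T gradually so metastable states can be escaped early on.
        def anneal_accept(delta_cost, temperature, rng=random.random):
            if delta_cost <= 0:
                return True
            if temperature <= 0:
                return False
            return rng() < math.exp(-delta_cost / temperature)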