International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists of determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) that allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized over a hyper-rectangular domain; equality constraints may optionally be specified. A related problem is the fitting of component models. In that case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originates in combinatorial optimization, has been adapted to continuous-variable problems and compared with other global optimization methods. An efficient variable-discretization strategy and a set of complementary stopping criteria are proposed. The parameters of the method were tuned on analytical test functions with known minima, classically used in the literature. Our simulated annealing algorithm was coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we propose a partitioning technique that keeps CPU time proportional to the number of variables. To compare our method with others, we adapted three further methods from combinatorial optimization (the threshold method, a genetic algorithm, and tabu search). The tests were performed on the same set of test functions, and the results allow a first comparison of these methods applied to continuous optimization variables. Finally, our simulated annealing program
Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach
Directory of Open Access Journals (Sweden)
Sungwook Kim
2014-01-01
A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize into network topologies without any preexisting communication infrastructure. Due to characteristics such as temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme based on a simulated annealing approach is proposed. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic real-world network situations, and is therefore a powerful method for finding effective solutions to the conflicting requirements of mobile ad hoc network routing. Simulation results indicate that the proposed paradigm adapts best to variations in dynamic network situations: average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over existing schemes.
Stochastic Global Optimization and Its Applications with Fuzzy Adaptive Simulated Annealing
Aguiar e Oliveira Junior, Hime; Petraglia, Antonio; Rembold Petraglia, Mariane; Augusta Soares Machado, Maria
2012-01-01
Stochastic global optimization is a very important subject with applications in virtually all areas of science and technology. There is therefore nothing more opportune than a book about a successful and mature algorithm that has turned out to be a good tool for solving difficult problems. Here we present techniques for solving several problems by means of Fuzzy Adaptive Simulated Annealing (Fuzzy ASA), a fuzzy-controlled version of ASA, and by ASA itself. ASA is a sophisticated global optimization algorithm based on the ideas of the simulated annealing paradigm, coded in the C programming language and developed to statistically find the best global fit of a nonlinear, constrained, non-convex cost function over a multidimensional space. By presenting detailed examples of its application we aim to stimulate the reader's intuition and make the use of Fuzzy ASA (or regular ASA) easier for everyone wishing to use these tools to solve problems. We kept formal mathematical requirements to a...
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
In some multi-objective metaheuristic algorithms, a multi-objective optimization problem is solved by decomposing it into one or more single-objective subproblems, each corresponding to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D depends strongly on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to the various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
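The decomposition idea above can be made concrete with the standard weighted Tchebycheff scalarization used by MOEA/D-style algorithms. This is a sketch of the textbook aggregation only, not EMOSA's exact weight-adaptation rule; all names are illustrative:

```python
def weighted_tchebycheff(objectives, weights, ideal):
    """Scalarize one multi-objective point: each weight vector defines one
    single-objective subproblem, as in MOEA/D and its EMOSA variant."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two subproblems (weight vectors) evaluating the same bi-objective point:
point = [2.0, 4.0]   # objective values f1, f2 (minimization)
ideal = [0.0, 0.0]   # best value seen so far for each objective
g_even = weighted_tchebycheff(point, [0.5, 0.5], ideal)  # -> 2.0
g_f1 = weighted_tchebycheff(point, [0.9, 0.1], ideal)    # -> 1.8
```

Modifying a subproblem's weight vector changes which region of the Pareto front its scalar value rewards, which is the lever EMOSA uses to push the search toward unexplored parts of the front.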
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
Keystream Generator Based On Simulated Annealing
Directory of Open Access Journals (Sweden)
Ayad A. Abdulsalam
2011-01-01
Advances in the design of keystream generators using heuristic techniques are reported. A simulated annealing algorithm for generating random keystreams with large complexity is presented, with the annealing process adapted to satisfy the cryptographic requirements. Definitions of some cryptographic properties are generalized, providing a measure suitable for use as an objective function in a simulated annealing search for keystreams that satisfy both correlation immunity and large linear complexity. Results demonstrating the effectiveness of the method are presented.
Simulated annealing model of acupuncture
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing capacity of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. The model explains the following facts from systematic reviews of acupuncture trials: 1. properly chosen single-acupoint treatment for certain disorders can achieve highly repeatable efficacy above placebo; 2. when multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail, and have multiple concurrent disorders, as the number of local optima or comorbidities increases; 3. as the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
Simulated annealing approach for solving economic load dispatch ...
African Journals Online (AJOL)
Abstract. This paper presents a Simulated Annealing (SA) algorithm for optimization, inspired by the process of annealing in ... Various classical optimization techniques were used to solve the ELD problem, for example the lambda iteration approach, ...
Simulated annealing with constant thermodynamic speed
International Nuclear Information System (INIS)
Salamon, P.; Ruppeiner, G.; Liao, L.; Pedersen, J.
1987-01-01
Arguments are presented to the effect that the optimal annealing schedule for simulated annealing proceeds with constant thermodynamic speed, i.e., with dT/dt = -vT/(ε√C), where T is the temperature, ε is the relaxation time, C is the heat capacity, t is the time, and v is the thermodynamic speed. Experimental results consistent with this conjecture are presented from simulated annealing on graph partitioning problems. (orig.)
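Holding ε and C constant, the constant-thermodynamic-speed equation can be integrated numerically; in a real annealing run both quantities depend on temperature and would be re-estimated as the system cools. A minimal sketch with illustrative names and constants:

```python
import math

def constant_speed_schedule(t0, v, relax, heat_cap, dt, steps):
    """Forward-Euler integration of dT/dt = -v*T / (relax * sqrt(heat_cap)).

    relax (the relaxation time epsilon) and heat_cap (C) are held constant
    here for illustration only; their temperature dependence is what makes
    the real schedule non-trivial."""
    temps = [t0]
    temp = t0
    for _ in range(steps):
        temp += dt * (-v * temp / (relax * math.sqrt(heat_cap)))
        temps.append(temp)
    return temps

# With constant epsilon and C the schedule reduces to exponential cooling,
# T(t) = T0 * exp(-v * t / (epsilon * sqrt(C))).
schedule = constant_speed_schedule(10.0, 1.0, 1.0, 4.0, 0.001, 1000)
```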
Cylinder packing by simulated annealing
Directory of Open Access Journals (Sweden)
M. Helena Correia
2000-12-01
This paper is motivated by the problem of loading identical items with a circular base (tubes, rolls, ...) onto a rectangular base (the pallet). For practical reasons, all loaded items are considered to have the same height. Solving this problem consists of determining the positioning pattern of the circular bases of the items on the rectangular pallet while maximizing the number of items. This pattern is repeated for each layer stacked on the pallet. Two algorithms based on the simulated annealing meta-heuristic were developed and implemented. Tuning the parameters of these algorithms required intensive tests in order to improve their efficiency. The algorithms were easily extended to the case of non-identical circles.
Intelligent medical image processing by simulated annealing
International Nuclear Information System (INIS)
Ohyama, Nagaaki
1992-01-01
Image processing is widely used in the medical field and has already become very important, especially for image reconstruction purposes. In this paper, it is shown that image processing can be classified into four categories: passive, active, intelligent, and visual image processing. These four classes are first explained through several examples; the results show that passive image processing does not give better results than the other classes. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence can easily be introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Before concluding, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. The paper concludes by summarizing the advantages of simulated annealing. (author)
Finite-time thermodynamics and simulated annealing
International Nuclear Information System (INIS)
Andresen, B.
1989-01-01
When the general global optimization technique simulated annealing was introduced by Kirkpatrick et al. (1983), the mathematical algorithm was based on an analogy to the statistical mechanical behavior of real physical systems like spin glasses, hence the name. In the intervening years the method has proven exceptionally useful for a great variety of extremely complicated problems, notably NP-hard problems like the travelling salesman, DNA sequencing, and graph partitioning. Only a few highly optimized heuristic algorithms (e.g. Lin and Kernighan 1973) have outperformed simulated annealing on their respective problems (Johnson et al. 1989). Simulated annealing in its current form relies only on the static quantity 'energy' to describe the system, whereas questions of rate, as in the temperature path (annealing schedule, see below), are left to intuition. We extend the connection to physical systems and take over further components from thermodynamics, such as ensemble, heat capacity, and relaxation time. Finally we refer to finite-time thermodynamics (Andresen, Salamon, Berry 1984) for a dynamical estimate of the optimal temperature path. (orig.)
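The "current form" described above, an energy function plus an intuition-chosen temperature path, can be sketched in a few lines: a generic Metropolis loop with geometric cooling. All names and constants here are illustrative, not taken from the source:

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t_start, t_end, alpha, rng):
    """Minimal SA loop: Metropolis acceptance with a geometric cooling
    schedule T <- alpha * T, the kind of intuitively chosen temperature
    path the abstract contrasts with thermodynamically derived ones."""
    temp = t_start
    cur, cur_e = state, energy(state)
    best, best_e = cur, cur_e
    while temp > t_end:
        cand = neighbor(cur, rng)
        cand_e = energy(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-dE / T).
        if cand_e <= cur_e or rng.random() < math.exp((cur_e - cand_e) / temp):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur, cur_e
        temp *= alpha
    return best, best_e

# Toy usage: minimize f(x) = (x - 3)^2 starting from x = 0.
rng = random.Random(42)
x, fx = simulated_annealing(
    energy=lambda s: (s - 3.0) ** 2,
    neighbor=lambda s, r: s + r.uniform(-0.5, 0.5),
    state=0.0, t_start=1.0, t_end=1e-4, alpha=0.995, rng=rng)
```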
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework is developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates investigating a simulated annealing algorithm for optimizing the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
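A toy Kelly-style sketch of the idea, not the paper's algorithm: simulated annealing searches for the fraction of capital in a risky asset that maximizes expected log growth over discrete return scenarios. All names, scenarios, and constants are illustrative:

```python
import math
import random

def log_growth(w, scenarios):
    """Expected log growth of wealth when fraction w of capital is placed
    in the risky asset; each scenario is (probability, gross return)."""
    return sum(p * math.log(1.0 - w + w * r) for p, r in scenarios)

def anneal_fraction(scenarios, steps=4000, seed=7):
    """Maximize log growth over w in [0, 1] by simulated annealing."""
    rng = random.Random(seed)
    w = 0.0
    g = log_growth(w, scenarios)
    best_w, best_g = w, g
    temp = 0.1
    for _ in range(steps):
        cand = min(1.0, max(0.0, w + rng.uniform(-0.1, 0.1)))
        cg = log_growth(cand, scenarios)
        if cg >= g or rng.random() < math.exp((cg - g) / temp):
            w, g = cand, cg
            if g > best_g:
                best_w, best_g = w, g
        temp *= 0.999
    return best_w, best_g

# Coin-flip asset: doubles or halves with equal probability. The optimal
# (Kelly) fraction is 0.5, with expected log growth of about 0.0589.
w_opt, g_opt = anneal_fraction([(0.5, 2.0), (0.5, 0.5)])
```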
Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors
Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.
1990-01-01
Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.
Binary Sparse Phase Retrieval via Simulated Annealing
Directory of Open Access Journals (Sweden)
Wei Peng
2016-01-01
This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transforms. A greedy-strategy version, which is parameter-free, is also proposed for comparison. Extensive numerical simulations indicate that the method is quite effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in its efficiency and high recovery rate, even with fewer Fourier measurements.
Simulated annealing for tensor network states
International Nuclear Information System (INIS)
Iblisdir, S
2014-01-01
Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)
MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING
Directory of Open Access Journals (Sweden)
Ladislav Rosocha
2015-07-01
Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Better implementation of staff preferences in the scheduling problem might not only raise the work-life balance of doctors and nurses but also result in better patient care. This paper focuses on optimizing medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize violations of soft constraints, which are related to quality of work, psychological well-being, and the work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department, we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final objective function value produced by the proposed algorithm has more than 18 times fewer soft-constraint violations than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm successfully generates schedules respecting both hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and can effectively reschedule, thanks to the local neighborhood search characteristics of simulated annealing.
A note on simulated annealing to computer laboratory scheduling ...
African Journals Online (AJOL)
A Simulated Annealing algorithm is used to solve the real-life problem of computer laboratory scheduling in order to maximize the use of scarce and insufficient resources. KEY WORDS: Simulated Annealing (SA), Computer Laboratory Scheduling, Statistical Thermodynamics, Energy Function, Heuristic. Global Jnl of ...
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Shi-hua Zhan
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated on benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
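The list-based acceptance rule can be sketched as follows. This is a simplification: the paper updates the list with averaged implied temperatures per outer iteration, while here one list entry is swapped per accepted uphill move; all names are illustrative:

```python
import math
import random

def lbsa_step(temps, cur, cur_e, neighbor, energy, rng):
    """One list-based SA move: the Metropolis test uses the maximum
    temperature in `temps`; when an uphill move is accepted, that maximum
    is replaced by the lower temperature implied by the move, so the list
    cools itself adaptively instead of following a tuned schedule."""
    cand = neighbor(cur, rng)
    cand_e = energy(cand)
    if cand_e <= cur_e:           # downhill: always accept
        return cand, cand_e
    t_max = max(temps)
    r = rng.random()
    if 0.0 < r < math.exp((cur_e - cand_e) / t_max):
        # Temperature at which this uphill move would be accepted with
        # probability exactly r; it is always below t_max.
        temps.remove(t_max)
        temps.append((cand_e - cur_e) / -math.log(r))
        return cand, cand_e
    return cur, cur_e

# Toy run: minimize (x - 3)^2 starting from x = 0.
rng = random.Random(1)
temps = [1.0] * 10
cur, cur_e = 0.0, 9.0
for _ in range(3000):
    cur, cur_e = lbsa_step(temps, cur, cur_e,
                           lambda s, r: s + r.uniform(-0.5, 0.5),
                           lambda s: (s - 3.0) ** 2, rng)
```

Because each implied temperature is strictly below the current maximum, the maximum of the list decreases monotonically over accepted uphill moves, which is what replaces the hand-tuned cooling rate.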
Conventional treatment planning optimization using simulated annealing
International Nuclear Information System (INIS)
Morrill, S.M.; Langer, M.; Lane, R.G.
1995-01-01
Purpose: Simulated annealing (SA) allows the implementation of realistic biological and clinical cost functions in treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing delivery of these optimized plans with conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm for clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms that restrict the number of beam weights in the final solution. The first selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second halved the allowed number of beams at periodic steps during the optimization process until only the specified number of beams remained. Results of optimizing beam weights and angles with these algorithms were compared on a standard set of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case, a clinically accepted cost function, minimum tumor dose, was maximized subject to a set of normal-tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm, averaged over 5 different runs, and the comparable manual treatment
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
Simulated annealing image reconstruction for positron emission tomography
International Nuclear Information System (INIS)
Sundermann, E.; Lemahieu, I.; Desmedt, P.
1994-01-01
In positron emission tomography (PET), images have to be reconstructed from noisy projection data. The noise on PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast in comparison to other reconstruction techniques. (authors)
Simulated annealing CFAR threshold selection for South African ship detection in ASAR imagery
CSIR Research Space (South Africa)
Schwegmann, CP
2014-07-01
... chosen threshold value. Typically, the threshold value is chosen as a single floating-point value for all positions, creating a flat threshold plane. This study introduces a novel method of creating a threshold plane which is adapted using Simulated Annealing...
Phase annealing for the conditional simulation of spatial random fields
Hörning, S.; Bárdossy, A.
2018-03-01
Simulated annealing (SA) is a popular geostatistical simulation method as it provides great flexibility. In this paper possible problems of conditioning its realizations are discussed. A statistical test to recognize whether the observations are well embedded in their simulated neighborhood or not is developed. A new simulated annealing method, phase annealing (PA), is presented which makes it possible to avoid poor embedding of observations. PA is based on the Fourier representation of the spatial field. Instead of the individual pixel values, phases corresponding to different Fourier components are modified (i.e. shifted) in order to match prescribed statistics. The method treats neighborhoods together and thus avoids singularities at observation locations. It is faster than SA and can be used for the simulation of high resolution fields. Examples demonstrate the applicability of the method.
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape, and that it can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
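The elevated specific heat tracked above can be estimated from energy fluctuations at each temperature of the schedule. A minimal estimator, assuming the chain is roughly equilibrated at each temperature (the function name is illustrative):

```python
def specific_heat(energies, temp):
    """Fluctuation estimate of specific heat at one temperature:
    C(T) = (<E^2> - <E>^2) / T^2. A peak in C(T) along the cooling
    schedule marks a simulated annealing 'phase transition'."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / temp ** 2

# Energies sampled at T = 2.0 with variance 1.0 give C = 0.25.
c = specific_heat([1.0, 3.0], 2.0)
```

Recording C(T) over the run yields the per-input "specific heat profile" the abstract describes, at essentially no extra cost beyond storing the sampled energies.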
Angular filter refractometry analysis using simulated annealing.
Angland, P; Haberberger, D; Ivancic, S T; Froula, D H
2017-10-01
Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical-uncertainty calculation are based on the minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
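The fitting loop described above can be sketched as annealing a parameter vector to minimize a χ² misfit. A toy two-parameter exponential profile stands in for the paper's eight-parameter AFR density profile; the schedule, step sizes, and synthetic data are assumptions, not the authors' implementation.

```python
import math
import random

def chi2(params, xs, data, sigma):
    """Chi-squared misfit between a model profile and measured data. A toy
    2-parameter exponential profile n(x) = n0 * exp(-x/L) stands in for the
    8-parameter AFR profile (a hypothetical simplification)."""
    n0, L = params
    return sum((n0 * math.exp(-x / L) - d) ** 2 / sigma ** 2
               for x, d in zip(xs, data))

def anneal_fit(xs, data, sigma=0.05, steps=5000, t0=1.0, seed=1):
    rng = random.Random(seed)
    p = [1.0, 1.0]                              # initial guess for (n0, L)
    e = chi2(p, xs, data, sigma)
    best, e_best = p[:], e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9         # linear cooling schedule
        # Gaussian perturbation of the parameters; L is kept positive.
        q = [p[0] + rng.gauss(0, 0.05), max(0.1, p[1] + rng.gauss(0, 0.05))]
        eq = chi2(q, xs, data, sigma)
        if eq <= e or rng.random() < math.exp((e - eq) / t):
            p, e = q, eq
            if e < e_best:
                best, e_best = p[:], e          # keep the best state seen so far
    return best, e_best

xs = [0.1 * i for i in range(20)]
data = [2.0 * math.exp(-x / 0.7) for x in xs]   # noiseless synthetic "measurement"
p_fit, chi2_min = anneal_fit(xs, data)
```

In the paper the statistical uncertainty is then read off from the χ² landscape around the minimum; the sketch only performs the minimization itself.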
A theoretical comparison of evolutionary algorithms and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Crystallographic refinement by simulated annealing: Application to crambin
International Nuclear Information System (INIS)
Bruenger, A.T.; Yale Univ., New Haven, CT; Harvard Univ., Cambridge, MA; Karplus, M.; Petsko, G.A.
1989-01-01
A detailed description of the method of crystallographic refinement by simulated annealing is presented. To test the method, it has been applied to a 1.5 A resolution X-ray structure of crambin. The dependence of the success of the simulated annealing protocol with respect to the temperature of the heating stage is discussed. Optimal success is achieved at relatively high temperatures. Regardless of the protocol used, the molecular-dynamics refined structure always yields an improved R factor compared with restrained least-squares refinement without manual re-fitting. The differences between the various refined structures and the corresponding electron density maps are discussed. (orig.)
Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing
Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.
2006-01-01
The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
The afforestation problem: a heuristic method based on simulated annealing
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1992-01-01
This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....
Physical Mapping Using Simulated Annealing and Evolutionary Algorithms
DEFF Research Database (Denmark)
Vesterstrøm, Jacob Svaneborg
2003-01-01
Physical mapping (PM) is a method of bioinformatics that assists in DNA sequencing. The goal is to determine the order of a collection of fragments taken from a DNA strand, given knowledge of certain unique DNA markers contained in the fragments. Simulated annealing (SA) is the most widely used...
Molecular dynamics simulation of annealed ZnO surfaces
Energy Technology Data Exchange (ETDEWEB)
Min, Tjun Kit; Yoon, Tiem Leong [School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Lim, Thong Leng [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia)
2015-04-24
The effect of thermally annealing a slab of wurtzite ZnO, terminated by two surfaces, (0001) (which is oxygen-terminated) and (0001̄) (which is Zn-terminated), is investigated via molecular dynamics simulation by using reactive force field (ReaxFF). We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The ratio of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as formation of oxygen dimers on the surface and evolution of partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of the depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.
Particle Based Image Segmentation with Simulated Annealing
Everts, M.H.; Bekker, H.; Jalba, A.C.; Roerdink, J.B.T.M.
2007-01-01
The Charged Particle Model (CPM) is a physically motivated deformable model for shape recovery and segmentation. It simulates a system of charged particles moving in an electric field generated from the input image, whose positions in the equilibrium state are used for curve or surface
Selection of views to materialize using simulated annealing algorithms
Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin
2002-03-01
A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries is answered efficiently. The goal is to minimize the combined cost of query evaluation and view maintenance. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve it. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
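A minimal sketch of the idea: simulated annealing over subsets of candidate views, with an objective that sums query-evaluation and view-maintenance costs. The cost model and the tiny instance below are hypothetical, not taken from the paper.

```python
import math
import random

def cost(sel, base, via, maint):
    """Total cost of a view selection: each query pays its base cost or the
    cheaper cost through some selected view, plus maintenance for each view."""
    q = sum(min([base[i]] + [via[i][v] for v in sel]) for i in range(len(base)))
    return q + sum(maint[v] for v in sel)

def sa_select_views(base, via, maint, steps=2000, t0=5.0, seed=2):
    rng = random.Random(seed)
    n = len(maint)
    sel = frozenset()                      # start with no views materialized
    e = cost(sel, base, via, maint)
    best, e_best = sel, e
    for k in range(steps):
        t = t0 * 0.999 ** k                # geometric cooling
        v = rng.randrange(n)
        cand = sel ^ {v}                   # flip one view in or out of the selection
        ec = cost(cand, base, via, maint)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            sel, e = cand, ec
            if e < e_best:
                best, e_best = sel, e
    return best, e_best

# Hypothetical instance: 3 queries, 2 candidate views. View 0 is cheap to
# maintain and speeds up every query; view 1 is expensive and useless.
base = [10, 10, 10]
via = [[1, 99], [1, 99], [1, 99]]
maint = [2, 100]
sel, total = sa_select_views(base, via, maint)
```

On this instance the optimal choice is to materialize only view 0, for a total cost of 3 + 2 = 5.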
Metode Simulated Annealing untuk Optimasi Penjadwalan Perkuliahan Perguruan Tinggi
Directory of Open Access Journals (Sweden)
Wiktasari Sari
2016-12-01
Full Text Available Course scheduling is the assignment of courses and lecturers to the available time slots, subject to certain restrictions. Simulated annealing is a heuristic method that can be used as a search method and provides acceptable solutions with good results. This research aims to schedule courses at a college using simulated annealing with five data variables: lecturer, course, time slot (comprising day and period), and classroom. The research has two objective functions: the first is the assignment of a lecturer to the courses he or she will teach; the second is the assignment of lecturers and their courses to the available time slots and rooms. The objective function is calculated taking into account the restrictions involved, to produce the optimal solution. Validation was performed by testing the simulated annealing method: on average 77.791% of the data variants reached a solution, with a standard deviation of 3.931509. This research also presents a method for reusing the remaining search space for data that are still unallocated.
Simulated annealing band selection approach for hyperspectral imagery
Chang, Yang-Lang; Fang, Jyh-Perng; Hsu, Wei-Lieh; Chang, Lena; Chang, Wen-Yen
2010-09-01
In hyperspectral imagery, greedy modular eigenspace (GME) was developed by clustering highly correlated bands into a smaller subset based on the greedy algorithm. Unfortunately, it is hard for GME to find the optimal set by a greedy scheme except through exhaustive iteration. The long execution time has been the major drawback in practice. Accordingly, finding the optimal (or near-optimal) solution is very expensive. Instead of adopting the band-subset-selection paradigm underlying this approach, we introduce a simulated annealing band selection (SABS) approach, which takes sets of non-correlated bands for high-dimensional remote sensing images based on a heuristic optimization algorithm, to overcome this disadvantage. It utilizes the inherent separability of different classes embedded in high-dimensional data sets to reduce dimensionality and formulate the optimal or near-optimal GME feature. Our proposed SABS scheme has a number of merits. Unlike traditional principal component analysis, it avoids the bias problems that arise from transforming the information into linear combinations of bands. SABS can not only speed up the procedure to simultaneously select the most significant features according to the simulated annealing optimization scheme to find GME sets, but also further extend the convergence abilities in the solution space based on the simulated annealing method to reach the global optimal or near-optimal solution and escape from local minima. The effectiveness of the proposed SABS is evaluated by NASA MODIS/ASTER (MASTER) airborne simulator data sets and airborne synthetic aperture radar images for land cover classification during the Pacrim II campaign. The performance of our proposed SABS is validated by a supervised k-nearest-neighbor classifier. The experimental results show that SABS is an effective technique of band subset selection and can be used as an alternative to the existing dimensionality reduction methods.
Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem
Directory of Open Access Journals (Sweden)
Jin Qin
2012-01-01
Full Text Available The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method.
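The two-layer structure can be sketched as follows: the outer annealer searches over open/closed facility decisions, while the inner layer solves the allocation subproblem (with uncapacitated facilities, each customer simply takes its cheapest open facility, so no inner annealing is needed in this toy version). All costs and instance data below are hypothetical.

```python
import math
import random

def allocate(open_fac, assign_cost):
    """Inner layer: given the open facilities, assign each customer to its
    cheapest open facility (exactly solvable here, no inner annealing needed)."""
    return sum(min(assign_cost[c][f] for f in open_fac)
               for c in range(len(assign_cost)))

def csa(fixed_cost, assign_cost, steps=3000, t0=10.0, seed=3):
    """Outer layer: simulated annealing over the set of open facilities."""
    rng = random.Random(seed)
    n = len(fixed_cost)
    state = frozenset(range(n))            # start with all facilities open

    def total(s):
        return sum(fixed_cost[f] for f in s) + allocate(s, assign_cost)

    e = total(state)
    best, e_best = state, e
    for k in range(steps):
        t = t0 * 0.998 ** k
        f = rng.randrange(n)
        cand = state ^ {f}                 # open or close one facility
        if not cand:                       # keep at least one facility open
            continue
        ec = total(cand)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            state, e = cand, ec
            if e < e_best:
                best, e_best = state, e
    return sorted(best), e_best

# Hypothetical instance: 2 candidate facilities, 2 customers; facility 1 is
# far too expensive to open, so the optimum opens facility 0 only.
fixed = [3, 50]
assign = [[1, 1], [1, 1]]
open_set, total_cost = csa(fixed, assign)
```

The optimum here opens only facility 0, for a total cost of 3 + 1 + 1 = 5.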
Optimisation of electron beam characteristics by simulated annealing
International Nuclear Information System (INIS)
Ebert, M.A.; University of Adelaide, SA; Hoban, P.W.
1996-01-01
Full text: With the development of technology in the field of treatment beam delivery, the possibility of tailoring radiation beams (via manipulation of the beam's phase space) is foreseeable. This investigation involved evaluating a method for determining the characteristics of pure electron beams which provide dose distributions that best approximate desired distributions. The aim is to determine which degrees of freedom are advantageous and worth pursuing in a clinical setting. A simulated annealing routine was developed to determine optimum electron beam characteristics. A set of beam elements is defined at the surface of a homogeneous water-equivalent phantom, defining discrete positions and angles of incidence, and electron energies. The optimal weighting of these elements is determined by the (generally approximate) solution to the linear equation Dw = d, where d represents the dose distribution calculated over the phantom, w the vector of (50 to 2×10⁴) beam element relative weights, and D a normalised matrix of dose deposition kernels. In the iterative annealing procedure, beam elements are randomly selected, and beam weighting distributions are sampled and used to perturb the selected elements. Perturbations are accepted or rejected according to standard simulated annealing criteria. The result (after the algorithm has terminated upon meeting an iteration or optimisation specification) is an approximate solution for the beam weight vector (w) specified by the above equation. This technique has been applied to several sample dose distributions and phase-space restrictions. An example is given of the phase space obtained when endeavouring to conform to a rectangular 100% dose region with polyenergetic though normally incident electrons. For regular distributions, intuitive conclusions regarding the benefits of energy/angular manipulation may be made, whereas for complex distributions, variations in intensity over beam elements of varying energy and
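A hedged sketch of the core step: annealing a non-negative weight vector w to approximately solve Dw = d. The toy 2×2 dose matrix, step size, and schedule below are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def residual(D, w, d):
    """Squared error ||D w - d||^2 for beam-element weights w (w >= 0)."""
    err = 0.0
    for i, row in enumerate(D):
        err += (sum(a * b for a, b in zip(row, w)) - d[i]) ** 2
    return err

def anneal_weights(D, d, steps=4000, t0=1.0, seed=4):
    rng = random.Random(seed)
    n = len(D[0])
    w = [0.0] * n                          # start with all elements unweighted
    e = residual(D, w, d)
    best, e_best = w[:], e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9    # linear cooling
        j = rng.randrange(n)               # randomly select one beam element
        cand = w[:]
        cand[j] = max(0.0, cand[j] + rng.gauss(0, 0.1))  # keep weights non-negative
        ec = residual(D, cand, d)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            w, e = cand, ec
            if e < e_best:
                best, e_best = cand[:], ec
    return best, e_best

# Toy dose-deposition matrix and target distribution (hypothetical values).
D = [[1.0, 0.0], [0.0, 1.0]]
d = [0.5, 0.25]
w_best, err = anneal_weights(D, d)
```

In the real problem D has thousands of columns and the perturbation distribution is itself tuned; the sketch keeps only the accept/reject skeleton.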
Sparse approximation problem: how rapid simulated annealing succeeds and fails
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
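For intuition, here is a minimal version of annealing over sparse supports, in the easy special case of an orthonormal dictionary (so the residual for a support is just the energy of the left-out coordinates). The planted signal and all parameters are toy assumptions, far simpler than the paper's setting.

```python
import math
import random

def support_cost(S, y):
    """Residual ||y - P_S y||^2 when the dictionary columns are orthonormal:
    projecting onto support S just keeps those coordinates of y."""
    return sum(v * v for i, v in enumerate(y) if i not in S)

def anneal_support(y, k, steps=1000, t0=1.0, seed=5):
    """Anneal over k-sparse supports by swapping one index in / one index out."""
    rng = random.Random(seed)
    n = len(y)
    S = set(range(k))                      # initial support: the first k indices
    e = support_cost(S, y)
    best, e_best = set(S), e
    for step in range(steps):
        t = t0 * 0.99 ** step
        out = rng.choice(sorted(S))
        inn = rng.choice([i for i in range(n) if i not in S])
        cand = (S - {out}) | {inn}         # swap one index of the support
        ec = support_cost(cand, y)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            S, e = cand, ec
            if e < e_best:
                best, e_best = set(S), e
    return sorted(best), e_best

y = [0.1, 3.0, -0.2, 0.05, 2.5, -0.1]     # planted sparse signal at indices 1 and 4
support, res = anneal_support(y, k=2)
```

The metastable states the paper analyses appear only in harder regimes (general dictionaries, support size near the data dimension); this sketch shows the easy regime where annealing finds the planted support quickly.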
Simulated Annealing-Based Krill Herd Algorithm for Global Optimization
Directory of Open Access Journals (Sweden)
Gai-Ge Wang
2013-01-01
Full Text Available Recently, Gandomi and Alavi proposed a novel swarm-intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating the krill's position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator involves a greedy strategy and accepts a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of this improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
spsann - optimization of sample patterns using spatial simulated annealing
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
Directory of Open Access Journals (Sweden)
Hailong Wang
2018-01-01
Full Text Available The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, while its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F is adaptively decreased as the number of iterations increases and introduces no extra parameters. A self-adaptive ε-constrained method is used to handle strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.
Geometric Optimization of Thermo-electric Coolers Using Simulated Annealing
Khanh, D. V. K.; Vasant, P. M.; Elamvazuthi, I.; Dieu, V. N.
2015-09-01
The field of thermo-electric coolers (TECs) has grown rapidly in recent years. In extreme environments such as thermal-energy and gas drilling operations, a TEC is an effective cooling mechanism for instruments. However, limitations such as the relatively low energy-conversion efficiency and the ability to dissipate only a limited amount of heat flux may seriously limit the lifetime and performance of the instrument. Until now, much research has been conducted to improve the efficiency of TECs. The material parameters are the most significant, but they are restricted by currently available materials and module-fabrication technologies. Therefore, the main objective in finding the optimal TEC design is to define a set of design parameters. In this paper, a new method of optimizing the dimensions of TECs using simulated annealing (SA), to maximize the rate of refrigeration (ROR), is proposed. Equality and inequality constraints were taken into consideration. This work reveals that SA performs better than Cheng's earlier approach.
Optimization of multiple-layer microperforated panels by simulated annealing
DEFF Research Database (Denmark)
Ruiz Villamil, Heidi; Cobo, Pedro; Jacobsen, Finn
2011-01-01
Sound absorption by microperforated panels (MPP) has received increasing attention the past years as an alternative to conventional porous absorbers in applications with special cleanliness and health requirements. The absorption curve of an MPP depends on four parameters: the holes diameter...... applications. However, when a wider absorption frequency band is required, it is necessary to design multiple-layer MPP (ML-MPP). The design of an N-layers MPP depends on 4N parameters. Consequently, the tuning of an optimal ML-MPP by exhaustive search within a prescribed frequency band becomes impractical....... Therefore, simulated annealing is proposed in this paper as a tool to solve the optimization problem of finding the best combination of the constitutive parameters of an ML-MPP providing the maximum average absorption within a prescribed frequency band....
A simulated annealing approach for redesigning a warehouse network problem
Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia
2017-09-01
Nowadays, several companies are considering downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the advantages of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is considering a capacity constraint for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. Numerical results showed that the proposed model and solution method are practical.
Enhanced Simulated Annealing for Solving Aggregate Production Planning
Directory of Open Access Journals (Sweden)
Mohd Rizam Abu Bakar
2016-01-01
Full Text Available Simulated annealing (SA) has been an effective means of addressing difficulties related to optimisation problems, and is now a common research discipline with several productive applications such as production planning. Since aggregate production planning (APP) is one of the most considerable problems in production planning, in this paper we present a multiobjective linear programming model for APP and optimise it by SA. In the course of optimising the APP problem, it was found that the capability of SA was inadequate and its performance substandard, particularly for a sizable constrained APP problem with many decision variables and plenty of constraints. Because the algorithm works sequentially, the current state generates only one next state, which slows the search, and the search may become trapped in a local minimum that is the best solution in only part of the solution space. In order to enhance its performance and alleviate these deficiencies, a modified SA (MSA) is proposed. We augment the search space by starting with N+1 solutions instead of one. To analyse and investigate the operation of the MSA against the standard SA and harmony search (HS), evaluations are made on the real performance of an industrial company and on simulations. The results show that, compared to SA and HS, MSA offers better-quality solutions with regard to convergence and accuracy.
Directory of Open Access Journals (Sweden)
I Gede Agus Widyadana
2002-01-01
Full Text Available The research focuses on comparing a genetic algorithm and simulated annealing in terms of performance and processing time. The main purpose is to assess the ability of both algorithms to minimise makespan and total flowtime in a particular flow-shop system. The performance of the algorithms is evaluated by simulating problems with varying combinations of jobs and machines. The results show that simulated annealing outperforms the genetic algorithm by up to 90%. The genetic algorithm scores only on processing time, but the observed trend suggests that for problems with many jobs and machines, simulated annealing will run faster than the genetic algorithm. Keywords: genetic algorithm, simulated annealing, flow shop, makespan, total flowtime.
Screening technique for loading pattern optimization by simulated annealing
International Nuclear Information System (INIS)
Park, Tong Kyu; Kim, Chang Hyo; Lee, Hyun Chul; Joo, Hyung Kook
2005-01-01
Lots of effort has been devoted to developing fuel assembly (FA) loading pattern (LP) optimization codes using various optimization algorithms. Among them, the simulated annealing (SA) algorithm appears very promising because of its robustness in optimization calculations. However, the SA algorithm has a major drawback of long computing time, because it requires the neutronics evaluation of several tens of thousands of trial LPs in the course of the optimization. In order to reduce computing time, a simple two-dimensional (2D) neutronics evaluation model has been used. Unfortunately, the final LP obtained from the 2D SA calculation often turns out to be unsatisfactory when evaluated with a three-dimensional (3D) neutronics model. A simple and straightforward way of resolving this problem would be to adopt the 3D evaluation model instead of the 2D model during the optimization procedure, but this would take a long computing time. In this paper we propose a screening technique based on the 2D evaluation model, aimed at reducing computing time in SA calculations with a 3D neutronics evaluation model.
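The screening idea generalizes beyond reactor physics: evaluate every trial state with a cheap surrogate first, and spend the expensive evaluation only on candidates that pass. A hedged generic sketch, where the surrogate, threshold, and test functions are all assumptions standing in for the 2D and 3D neutronics models:

```python
import math
import random

def screened_anneal(cheap, expensive, neighbor, x0, threshold=0.0,
                    steps=2000, t0=1.0, seed=8):
    """SA where every trial state is first screened with a cheap (2D-like)
    model; the expensive (3D-like) evaluation runs only for candidates whose
    cheap score is not much worse than the current one. The margin 'threshold'
    is a tunable assumption."""
    rng = random.Random(seed)
    x = x0
    e = expensive(x)
    calls = 0                                 # count of expensive evaluations
    best, e_best = x, e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9
        y = neighbor(x, rng)
        if cheap(y) > cheap(x) + threshold:   # screened out: skip 3D evaluation
            continue
        ey = expensive(y)
        calls += 1
        if ey <= e or rng.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < e_best:
                best, e_best = y, e
    return best, e_best, calls

# Toy stand-ins: the surrogate is a smoothed version of the true objective.
cheap = lambda x: (x - 2.0) ** 2
expensive = lambda x: (x - 2.1) ** 2 + 0.3 * math.sin(8 * x)
neighbor = lambda x, rng: x + rng.uniform(-0.3, 0.3)
best, e_best, n_expensive = screened_anneal(cheap, expensive, neighbor,
                                            x0=-3.0, threshold=0.2)
```

The screen trades a small risk of rejecting good candidates for a large reduction in expensive evaluations, which is exactly the compromise the paper's 2D/3D scheme makes.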
Simulated Annealing Technique for Routing in a Rectangular Mesh Network
Directory of Open Access Journals (Sweden)
Noraziah Adzhar
2014-01-01
Full Text Available In the process of automatic design for printed circuit boards (PCBs), the phase following cell placement is routing. On the other hand, routing is a notoriously difficult problem, and even the simplest routing problem, which consists of a set of two-pin nets, is known to be NP-complete. In this research, our routing region is first tessellated into a uniform N_x × N_y array of square cells. The ultimate goal for a routing problem is to achieve complete automatic routing with minimal need for manual intervention. Therefore, the shortest path for all connections needs to be established. While the classical Dijkstra algorithm guarantees to find the shortest path for a single net, each routed net forms an obstacle for later paths. This adds complexity to routing the later nets and makes their routes longer than the optimal path, or sometimes impossible to complete. Today's sequential routing often applies a heuristic method to further refine the solution. Through this process, all nets are rerouted in a different order to improve the quality of the routing. Because of this, we are motivated to apply simulated annealing, one of the metaheuristic methods, to our routing model to produce better candidate sequences.
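The reroute-in-different-orders idea can be sketched as annealing over net orderings, where each net's cost depends on which conflicting nets were routed before it. The asymmetric conflict matrix below is a toy assumption, not an actual PCB instance.

```python
import math
import random

def wirelength(order, base, conflict):
    """Total routed length for a given net ordering: each net pays its base
    length plus a detour for every conflicting net routed before it."""
    total, routed = 0, []
    for net in order:
        total += base[net] + sum(conflict[net][p] for p in routed)
        routed.append(net)
    return total

def anneal_order(base, conflict, steps=3000, t0=5.0, seed=6):
    """Simulated annealing over permutations of the net routing sequence."""
    rng = random.Random(seed)
    n = len(base)
    order = list(range(n))
    e = wirelength(order, base, conflict)
    best, e_best = order[:], e
    for k in range(steps):
        t = t0 * 0.998 ** k
        i, j = rng.randrange(n), rng.randrange(n)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]   # swap two nets in the sequence
        ec = wirelength(cand, base, conflict)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            order, e = cand, ec
            if e < e_best:
                best, e_best = cand[:], ec
    return best, e_best

# Toy instance: net i pays conflict[i][j] extra length if net j was routed first.
base = [1, 1, 1]
conflict = [[0, 0, 0],
            [5, 0, 0],
            [5, 5, 0]]
best_order, e_best = anneal_order(base, conflict)
```

Here the annealer should discover that routing net 2 first, then 1, then 0 avoids all detours, for a total length of 3.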
Finding a Hadamard matrix by simulated annealing of spin vectors
Bayu Suksmono, Andriyan
2017-05-01
Reformulation of a combinatorial problem into optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is the unit vector and whose remaining columns are vectors with an equal number of -1 and +1 entries, called SH-vectors. We define SH spin vectors to represent the SH vectors, playing a similar role to the spins in the Ising model. The topology of the lattice is generalized into a graph whose edges represent the orthogonality relationship among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, a matrix similar to the SH-matrix but without imposed orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
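A minimal spin-annealing sketch for small orders: fix the first row and column to +1 (a normalized form), define the energy as the sum of squared inner products between distinct rows, and flip interior spins under the Metropolis rule. Single-spin flips replace the paper's {+, -} pair exchanges, so this is an illustrative simplification, not the paper's algorithm.

```python
import math
import random

def gram_energy(M):
    """Energy = sum of squared inner products between distinct rows;
    zero exactly when the rows are mutually orthogonal (a Hadamard matrix)."""
    n = len(M)
    e = 0
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(M[i][k] * M[j][k] for k in range(n))
            e += dot * dot
    return e

def anneal_hadamard(n=4, steps=5000, t0=8.0, seed=7):
    rng = random.Random(seed)
    # Normalized form: first row and first column fixed to +1;
    # annealing only flips the interior spins.
    M = [[1] * n] + [[1] + [rng.choice([-1, 1]) for _ in range(n - 1)]
                     for _ in range(n - 1)]
    e = gram_energy(M)
    for k in range(steps):
        if e == 0:                         # orthogonal rows: Hadamard found
            break
        t = t0 * 0.998 ** k
        i, j = rng.randrange(1, n), rng.randrange(1, n)
        M[i][j] = -M[i][j]                 # propose: flip one interior spin
        ec = gram_energy(M)
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            e = ec                         # accept (Metropolis rule)
        else:
            M[i][j] = -M[i][j]             # reject: undo the flip
    return M, e

def find_hadamard(n=4, tries=20):
    """Restart the annealing from several seeds; order 4 has few ground
    states, so a handful of restarts reliably reaches energy zero."""
    M, e = anneal_hadamard(n, seed=0)
    for s in range(1, tries):
        if e == 0:
            break
        M, e = anneal_hadamard(n, seed=s)
    return M, e

H, energy_H = find_hadamard()
```

For larger orders the landscape acquires the local minima the paper navigates with pair exchanges and a graph of orthogonality constraints; this toy works only because the order-4 interior space is tiny.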
Sensitivity study on hydraulic well testing inversion using simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1997-11-01
For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
Adaptive Sampling in Hierarchical Simulation
Energy Technology Data Exchange (ETDEWEB)
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
Liang, Faming
2014-04-03
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow that it is computationally impractical. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, for example, a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
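The contrast between the two schedules can be illustrated with a plain simulated-annealing loop. This sketch omits the stochastic-approximation machinery the article introduces to make the faster schedule valid; the test function and constants are arbitrary.

```python
import math
import random

def anneal(f, x0, schedule, steps=20000, step=0.5, seed=1):
    # Generic Metropolis-style simulated annealing for a 1-D function f,
    # with the temperature at iteration k given by schedule(k).
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = schedule(k)
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx     # track the best point visited
    return best, fbest

# Logarithmic schedule: guarantees convergence but decays impractically slowly.
log_schedule = lambda k: 5.0 / math.log(k + 2)
# Square-root schedule: decays much faster; valid under the article's framework.
sqrt_schedule = lambda k: 5.0 / math.sqrt(k + 1)
```

After 20000 iterations the logarithmic schedule is still near temperature 0.5, while the square-root schedule has cooled to about 0.035, which is why the latter behaves almost greedily late in the run.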
Calculation of the director configuration of nematic liquid crystals by the simulated-anneal method
Heynderickx, I.; Raedt, H. De
1988-01-01
A new procedure for computing the equilibrium director pattern in a liquid-crystal-display cell subjected to an applied voltage is presented. It uses the simulated-anneal method which is based on the Metropolis Monte Carlo algorithm. The usefulness of the technique is illustrated by the simulation
International Nuclear Information System (INIS)
Komarov, F.F.; Komarov, A.F.; Mironov, A.M.; Makarevich, Yu.V.; Miskevich, S.A.; Zayats, G.M.
2011-01-01
Physical and mathematical models and numerical simulation of the diffusion of implanted impurities during rapid thermal treatment of silicon structures are discussed. The calculation results correspond to the experimental results with sufficient accuracy. A simulation software system has been developed that is integrated into the ATHENA simulation system developed by Silvaco Inc. This program can simulate the low-energy implantation of B, BF2, P, As, Sb, and C ions into silicon structures and the subsequent rapid thermal annealing. (authors)
Directory of Open Access Journals (Sweden)
Fayçal Chabni
2017-09-01
Harmonic pollution is a very common issue in the field of power electronics; harmonics can cause multiple problems for power converters and electrical loads alike. This paper introduces a modulation method called selective harmonic elimination pulse width modulation (SHEPWM), which allows the elimination of harmonics of specific orders while also controlling the amplitude of the fundamental component of the output voltage. In this work, the SHEPWM strategy is applied to a five-level cascade inverter. The objective of this study is to demonstrate the total control provided by the SHEPWM strategy over any rank of harmonics, using the simulated annealing optimization algorithm, and to control the amplitude of the fundamental component at any desired value. Simulation and experimental results are presented.
DEFF Research Database (Denmark)
Sousa, Tiago M; Morais, Hugo; Castro, R.
2014-01-01
An intensive use of dispersed energy resources is expected for future power systems, including distributed generation, especially based on renewable sources, and electric vehicles. The system operation methods and tool must be adapted to the increased complexity, especially the optimal resource...... to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach...
Directory of Open Access Journals (Sweden)
Bili Chen
2014-01-01
An enhanced differential evolution based algorithm, named multi-objective differential evolution with simulated annealing algorithm (MODESA), is presented for solving multiobjective optimization problems (MOPs). The proposed algorithm utilizes the advantage of simulated annealing to guide the search into more regions of the search space for a better convergence to the true Pareto-optimal front. In the proposed simulated annealing approach, a new acceptance probability computation function based on domination is proposed, and some potential solutions are assigned a life cycle so that they have priority to be selected for the next generation. Moreover, the algorithm incorporates an efficient diversity maintenance approach, which is used to prune the obtained nondominated solutions for a well-distributed Pareto front. The feasibility of the proposed algorithm is investigated on a set of five biobjective and two triobjective optimization problems, and the results are compared with three other algorithms. The experimental results illustrate the effectiveness of the proposed algorithm.
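The domination-based acceptance idea can be sketched as follows. This is an illustrative rule for minimization problems; MODESA's actual acceptance function and life-cycle bookkeeping are more involved.

```python
import math
import random

def dominates(a, b):
    # Pareto dominance for minimization: a is no worse than b in every
    # objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sa_accept(current, candidate, temperature, rng):
    # Always accept a dominating candidate; otherwise accept with a
    # Boltzmann probability driven by the summed objective deterioration.
    if dominates(candidate, current):
        return True
    worse = sum(max(0.0, c - x) for c, x in zip(candidate, current))
    return rng.random() < math.exp(-worse / temperature)
```

At high temperature nearly every candidate passes, encouraging exploration; as the temperature falls, only candidates that dominate or barely deteriorate survive.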
Optimization of pressurized water reactor shuffling by simulated annealing with heuristics
International Nuclear Information System (INIS)
Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.
1995-01-01
Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed
DEFF Research Database (Denmark)
Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup
2011-01-01
The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing Chordal Ring structure in optical backbone network. In recent years, topologies based on regular graph structures gained a lot of interest due to their good communication properties for physical topology...... of the networks. There have been many use of evolutionary algorithms to solve the problems which are in combinatory complexity nature, and extremely hard to solve by exact approaches. Both Genetic and Simulated annealing algorithms are similar in using controlled stochastic method to search the solution....... The paper combines the algorithms in order to analyze the impact of implementation performance....
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Ellaby, Tom; Aarons, Jolyon; Varambhia, Aakash; Jones, Lewys; Nellist, Peter; Ozkaya, Dogan; Sarwar, Misbah; Thompsett, David; Skylaris, Chris-Kriton
2018-04-01
Platinum nanoparticles find significant use as catalysts in industrial applications such as fuel cells. Research into their design has focussed heavily on nanoparticle size and shape as they greatly influence activity. Using high throughput, high precision electron microscopy, the structures of commercially available Pt catalysts have been determined, and we have used classical and quantum atomistic simulations to examine and compare them with geometric cuboctahedral and truncated octahedral structures. A simulated annealing procedure was used both to explore the potential energy surface at different temperatures, and also to assess the effect on catalytic activity that annealing would have on nanoparticles with different geometries and sizes. The differences in response to annealing between the real and geometric nanoparticles are discussed in terms of thermal stability, coordination number and the proportion of optimal binding sites on the surface of the nanoparticles. We find that annealing both experimental and geometric nanoparticles results in structures that appear similar in shape and predicted activity, using oxygen adsorption as a measure. Annealing is predicted to increase the catalytic activity in all cases except the truncated octahedra, where it has the opposite effect. As our simulations have been performed with a classical force field, we also assess its suitability to describe the potential energy of such nanoparticles by comparing with large scale density functional theory calculations.
Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-30
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.
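The two attribute-aggregation schemes named in this abstract (weighted sum and TOPSIS) can be sketched for ranking candidate routes by travel time and length. The vector normalization and equal weights below are illustrative assumptions, not the paper's calibrated cost functions.

```python
import math

def rank_weighted_sum(routes, weights):
    # routes: list of (travel_time, length) tuples; both are costs, so lower
    # is better. Returns the index of the route with the smallest weighted sum.
    return min(range(len(routes)),
               key=lambda i: sum(w * a for w, a in zip(weights, routes[i])))

def rank_topsis(routes, weights):
    # Vector-normalize each attribute column, apply the weights, then score
    # each route by its relative closeness to the ideal (all-minimum) point.
    cols = list(zip(*routes))
    norms = [math.sqrt(sum(a * a for a in col)) for col in cols]
    V = [[w * a / n for w, a, n in zip(weights, r, norms)] for r in routes]
    ideal = [min(col) for col in zip(*V)]   # best value of each cost attribute
    worst = [max(col) for col in zip(*V)]   # worst value of each cost attribute
    def closeness(v):
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(v, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(v, worst)))
        return d_neg / (d_pos + d_neg)
    return max(range(len(routes)), key=lambda i: closeness(V[i]))
```

Either score can serve as the cost inside a simulated-annealing route search; note that the TOPSIS closeness is undefined when all candidates are identical, a degenerate case a real implementation would have to guard against.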
A kinetic Monte Carlo annealing assessment of the dominant features from ion implant simulations
International Nuclear Information System (INIS)
Martin-Bragado, I.; Jaraiz, M.; Castrillo, P.; Pinacho, R.; Rubio, J.E.; Barbolla, J.
2004-01-01
Ion implantation and subsequent annealing are essential stages in today's advanced CMOS processing. Although the dopant implanted profile can be accurately predicted by analytical fits calibrated with SIMS profiles, the damage has to be estimated with a binary collision approximation implant simulator. Some models, like the '+n' model, have been proposed in an attempt to simplify the anneal simulation. We have used the atomistic kinetic Monte Carlo simulator DADOS to elucidate which implant modeling features are most relevant in the simulation of transient enhanced diffusion (TED). For the experimental conditions studied, we find that the spatial correlation of the I, V Frenkel pairs is not critical for yielding the correct I supersaturation, which can be simulated just by taking into account the net I-V excess distribution. In contrast, to simulate impurity clustering/deactivation when the impurity concentration is comparable to the net I-V excess, the full I and V profiles have to be used
A simulated annealing-based method for learning Bayesian networks from statistical data
Czech Academy of Sciences Publication Activity Database
Janžura, Martin; Nielsen, Jan
2006-01-01
Roč. 21, č. 3 (2006), s. 335-348 ISSN 0884-8173 R&D Projects: GA ČR GA201/03/0478 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian network * simulated annealing * Markov Chain Monte Carlo Subject RIV: BA - General Mathematics Impact factor: 0.429, year: 2006
A Simulated Annealing Algorithm for Maximum Common Edge Subgraph Detection in Biological Networks
DEFF Research Database (Denmark)
Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik
2016-01-01
introduce a heuristic algorithm for the multiple maximum common edge subgraph problem that is able to detect large common substructures shared across multiple, real-world size networks efficiently. Our algorithm uses a combination of iterated local search, simulated annealing and a pheromone...... apply it to unravel a biochemical backbone inherent in different species, modeled as multiple maximum common subgraphs....
Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers
Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that, as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
Improving Simulated Annealing by Recasting it as a Non-Cooperative Game
Wolpert, David; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
Simulated Annealing Genetic Algorithm Based Schedule Risk Management of IT Outsourcing Project
Directory of Open Access Journals (Sweden)
Fuqiang Lu
2017-01-01
IT outsourcing is an effective way to enhance the core competitiveness of many enterprises, but the schedule risk of an IT outsourcing project may cause enormous economic loss to the enterprise. In this paper, Distributed Decision Making (DDM) theory and principal-agent theory are used to build a model for schedule risk management of IT outsourcing projects. In addition, a hybrid algorithm combining simulated annealing (SA) and a genetic algorithm (GA) is designed, namely, the simulated annealing genetic algorithm (SAGA). The effect of the proposed model on the schedule risk management problem is analyzed in a simulation experiment. Meanwhile, the simulation results of the three algorithms GA, SA, and SAGA show that SAGA is superior to the other two algorithms in terms of stability and convergence. Consequently, this paper provides a scientific quantitative proposal for decision makers who need to manage the schedule risk of IT outsourcing projects.
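A toy version of the SA/GA hybrid described above. The representation, operators, schedule, and test function are illustrative; the paper applies the hybrid to its DDM-based schedule-risk model, not to this benchmark.

```python
import math
import random

def saga(f, dim=4, pop_size=20, gens=200, t0=1.0, cooling=0.97, seed=3):
    # Each generation applies GA variation (crossover + mutation) and then an
    # SA-style Metropolis test to decide whether the child replaces its parent.
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    t = t0
    for _ in range(gens):
        new_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            cut = rng.randrange(1, dim)                       # one-point crossover
            child = parent[:cut] + mate[cut:]
            child = [x + rng.gauss(0.0, 0.3) for x in child]  # Gaussian mutation
            delta = f(child) - f(parent)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                new_pop.append(child)                         # SA acceptance
            else:
                new_pop.append(parent)
        pop = new_pop
        t *= cooling                                          # geometric cooling
    return min(pop, key=f)

sphere = lambda v: sum(x * x for x in v)
```

Early on, the high temperature lets poor offspring survive (GA-style diversity); late in the run the Metropolis test is nearly greedy, giving SA-style convergence.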
Synthesis of optimal digital shapers with arbitrary noise using simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Regadío, Alberto, E-mail: aregadio@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain); Sánchez-Prieto, Sebastián, E-mail: sebastian.sanchez@uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Tabero, Jesús, E-mail: taberogj@inta.es [Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain)
2014-02-21
This paper presents the structure, design and implementation of a new way of determining the optimal shaping in time-domain for spectrometers by means of simulated annealing. The proposed algorithm is able to adjust automatically and in real-time the coefficients for shaping an input signal. A practical prototype was designed, implemented and tested on a PowerPC 405 embedded in a Field Programmable Gate Array (FPGA). Lastly, its performance and capabilities were measured using simulations and a neutron monitor.
Phase diagram of 2D Hubbard model by simulated annealing mean field approximation
International Nuclear Information System (INIS)
Kato, Masaru; Kitagaki, Takashi
1991-01-01
In order to investigate the stable magnetic structure of the Hubbard model on a square lattice, we utilize the dynamical simulated annealing method proposed by R. Car and M. Parrinello. Results of simulations on a 10 x 10 lattice system with 80 electrons, under the assumption of a collinear magnetic structure, show that the most stable state is an incommensurate spin density wave state with a periodic domain wall. (orig.)
Experiences with serial and parallel algorithms for channel routing using simulated annealing
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, even permitting overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of transformations utilizes a number of heuristics, while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
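The overlap-penalized cost idea mentioned above can be sketched as follows. This is a simplified cost for illustration only; the thesis's actual cost function and transformation set are richer.

```python
def routing_cost(track_of, span_of, overlap_weight=10.0):
    # track_of: net -> assigned track index.
    # span_of:  net -> (left_column, right_column) horizontal span.
    # Cost = number of tracks used, plus a penalty for every pair of nets
    # placed on the same track with overlapping horizontal spans. Keeping
    # overlaps feasible but penalized lets annealing pass through them.
    tracks_used = len(set(track_of.values()))
    nets = list(span_of)
    overlaps = 0
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            a, b = nets[i], nets[j]
            if track_of[a] == track_of[b]:
                (l1, r1), (l2, r2) = span_of[a], span_of[b]
                if l1 <= r2 and l2 <= r1:   # the intervals intersect
                    overlaps += 1
    return tracks_used + overlap_weight * overlaps
```

An annealer that moves nets between tracks can temporarily accept overlapping layouts at high temperature and is pushed toward overlap-free, compact solutions as the temperature drops.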
Cascade annealing: an overview
International Nuclear Information System (INIS)
Doran, D.G.; Schiffgens, J.O.
1976-04-01
Concepts and an overview of radiation displacement damage modeling and annealing kinetics are presented. Short-term annealing methodology is described, and results of annealing simulations performed on damage cascades generated using the Marlowe and Cascade programs are included. Observations concerning the inconsistencies and inadequacies of current methods are presented, along with simulations of high-energy cascades and of longer-term annealing
Brusco, Michael; Stolze, Hannah J; Hoffman, Michaela; Steinley, Douglas
2017-01-01
A popular objective criterion for partitioning a set of actors into core and periphery subsets is the maximization of the correlation between an ideal and observed structure associated with intra-core and intra-periphery ties. The resulting optimization problem has commonly been tackled using heuristic procedures such as relocation algorithms, genetic algorithms, and simulated annealing. In this paper, we present a computationally efficient simulated annealing algorithm for maximum correlation core/periphery partitioning of binary networks. The algorithm is evaluated using simulated networks consisting of up to 2000 actors and spanning a variety of densities for the intra-core, intra-periphery, and inter-core-periphery components of the network. Core/periphery analyses of problem solving, trust, and information sharing networks for the frontline employees and managers of a consumer packaged goods manufacturer are provided to illustrate the use of the model.
Gao, Xiaohui; Liu, Yongguang
2018-01-01
There is a seriously nonlinear relationship between input and output in the giant magnetostrictive actuator (GMA), and how to establish a mathematical model and identify its parameters is very important for studying its characteristics and improving control accuracy. The current-displacement model is first built based on Jiles-Atherton (J-A) model theory, the Ampere loop theorem, and a stress-magnetism coupling model. Laws relating the unknown parameters to the hysteresis loops are then studied to determine the data-taking scope. The modified simulated annealing differential evolution algorithm (MSADEA) is proposed, taking full advantage of the differential evolution algorithm's fast convergence and the simulated annealing algorithm's jumping property to enhance convergence speed and performance. Simulation and experiment results show that this algorithm is not only simple and efficient, but also has fast convergence speed and high identification accuracy.
Cloudsdale, Ian S; Dickson, John K; Barta, Thomas E; Grella, Brian S; Smith, Emilie D; Kulp, John L; Guarnieri, Frank; Kulp, John L
2017-08-01
We have applied simulated annealing of chemical potential (SACP) to a diverse set of ∼150 very small molecules to provide insights into new interactions in the binding pocket of human renin, a historically difficult target for which to find low molecular weight (MW) inhibitors with good bioavailability. In one of its many uses in drug discovery, SACP provides an efficient, thermodynamically principled method of ranking chemotype replacements for scaffold hopping and manipulating physicochemical characteristics for drug development. We introduce the use of Constrained Fragment Analysis (CFA) to construct and analyze ligands composed of linking those fragments with predicted high affinity. This technique addresses the issue of effectively linking fragments together and provides a predictive mechanism to rank order prospective inhibitors for synthesis. The application of these techniques to the identification of novel inhibitors of human renin is described. Synthesis of a limited set of designed compounds provided potent, low MW analogs (IC50s 20-58%). Copyright © 2017 Elsevier Ltd. All rights reserved.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities
Directory of Open Access Journals (Sweden)
Hayder Amer
2016-06-01
Full Text Available Vehicular traffic congestion is a significant problem in many cities, caused by the increasing number of vehicles driving on city roads of limited capacity. Congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoiding traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution (TOPSIS) and the Dijkstra algorithm. The weighted sum and TOPSIS methods are used to formulate different attributes in the simulated annealing cost function. For the Sheffield scenario, simulation results show that the improved simulated annealing TOPSIS method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO2 emissions as compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario.
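The weighted-sum scalarization that feeds the simulated annealing cost function can be illustrated directly. The weights, units and attribute values below are illustrative assumptions, not figures from the paper:

```python
def route_cost(edges, w_time=0.5, w_dist=0.5):
    """Weighted-sum cost of a route. Each edge is a (length_km, avg_speed_kmh)
    pair; the average speed is assumed to come from roadside sensors."""
    travel_time_min = sum(60.0 * length / speed for length, speed in edges)
    distance_km = sum(length for length, _ in edges)
    return w_time * travel_time_min + w_dist * distance_km

# Two hypothetical routes between the same origin and destination.
free_flow = [(2.0, 50.0), (3.0, 60.0)]   # longer, but moving at normal speed
congested = [(1.5, 10.0), (2.0, 8.0)]    # shorter, but crawling traffic
```

A simulated annealing route search would evaluate candidate paths with such a cost; here the congested route scores worse despite being shorter, which is the behavior the multi-attribute formulation is after.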
Energy Technology Data Exchange (ETDEWEB)
Huang, M.; Supek, S.; Aine, C.
1996-06-01
Empirical neuromagnetic studies have reported that multiple brain regions are active at single instants in time as well as across time intervals of interest. Determining the number of active regions, however, required a systematic search across increasing model orders using reduced chi-square measure of goodness-of-fit and multiple starting points within each model order assumed. Simulated annealing was recently proposed for noiseless biomagnetic data as an effective global minimizer. A modified cost function was also proposed to effectively deal with an unknown number of dipoles for noiseless, multi-source biomagnetic data. Numerical simulation studies were conducted using simulated annealing to examine effects of a systematic increase in model order using both reduced chi-square as a cost function as well as a modified cost function, and effects of overmodeling on parameter estimation accuracy. Effects of different choices of weighting factors are also discussed. Simulated annealing was also applied to visually evoked neuromagnetic data and the effectiveness of both cost functions in determining the number of active regions was demonstrated.
Registration of range data using a hybrid simulated annealing and iterative closest point algorithm
Energy Technology Data Exchange (ETDEWEB)
Luck, Jason; Little, Charles Q.; Hoff, William
2000-04-17
The need to register data is abundant in applications such as: world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
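The SA half of such a hybrid is easy to sketch. The 2D sketch below anneals a rigid transform (rotation plus translation) under a trimmed nearest-neighbour error, the robust device that suppresses outliers; the ICP refinement steps the paper interleaves are omitted for brevity, and all schedule constants are assumptions:

```python
import math
import random

def transform(points, theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def robust_error(src, dst, trim=0.8):
    # Trimmed mean of nearest-neighbour distances: the worst (1 - trim)
    # fraction of matches is discarded, suppressing outliers.
    dists = sorted(
        min(math.hypot(sx - dx, sy - dy) for dx, dy in dst) for sx, sy in src
    )
    keep = max(1, int(trim * len(dists)))
    return sum(dists[:keep]) / keep

def sa_register(src, dst, iters=5000, t0=1.0, seed=7):
    rng = random.Random(seed)
    params = [0.0, 0.0, 0.0]                    # theta, tx, ty
    err = robust_error(transform(src, *params), dst)
    best_p, best_e = params[:], err
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-3       # linear cooling
        cand = params[:]
        j = rng.randrange(3)
        cand[j] += rng.gauss(0.0, t if j == 0 else 3.0 * t)
        e = robust_error(transform(src, *cand), dst)
        if e <= err or rng.random() < math.exp((err - e) / (0.1 * t)):
            params, err = cand, e
            if e < best_e:
                best_p, best_e = cand[:], e
    return best_p, best_e
```

In the full hybrid, each accepted SA state would additionally be polished by a few ICP iterations, combining SA's ability to escape local minima with ICP's speed.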
Fast and accurate protein substructure searching with simulated annealing and GPUs
Directory of Open Access Journals (Sweden)
Stivala Alex D
2010-09-01
Full Text Available Abstract Background Searching a database of protein structures for matches to a query structure, or for occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as, or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.
Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing
International Nuclear Information System (INIS)
Menin, O.H.; Martinez, A.S.; Costa, A.M.
2016-01-01
A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach sets the initial acceptance and visitation temperatures and standardizes the terms of the objective function so that the algorithm automatically accommodates different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. Note that the regularization function was formulated to guarantee a smooth spectrum; the presented technique therefore does not apply to X-ray spectra in which characteristic radiation is present. - Highlights: • X-ray spectra reconstruction from attenuation data using generalized simulated annealing. • The algorithm employs a smoothing regularization function and sets the initial acceptance and visitation temperatures. • The algorithm is automated by standardizing the terms of the objective function. • The algorithm is compared with classical methods.
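A toy version of the regularized objective makes the setup concrete. Everything below (forward model, bin count, penalty weight, schedule) is an illustrative assumption, and plain Metropolis annealing stands in for the generalized variant: the misfit to the attenuation data is combined with a squared-difference smoothing term, and a nonnegativity clamp keeps the spectrum physical.

```python
import math
import random

def objective(s, A, d, lam=0.05):
    # Data misfit ||A s - d||^2 plus a smoothing regularization term.
    resid = sum(
        (sum(a * sj for a, sj in zip(row, s)) - di) ** 2
        for row, di in zip(A, d)
    )
    smooth = sum((s[i + 1] - s[i]) ** 2 for i in range(len(s) - 1))
    return resid + lam * smooth

def sa_reconstruct(A, d, n, iters=20000, t0=0.5, seed=3):
    rng = random.Random(seed)
    s = [1.0 / n] * n                       # flat initial spectrum
    f = objective(s, A, d)
    best_s, best_f = s[:], f
    for k in range(iters):
        t = t0 * 0.9995 ** k
        cand = s[:]
        i = rng.randrange(n)
        cand[i] = max(0.0, cand[i] + rng.gauss(0.0, 0.2 * t + 0.005))
        g = objective(cand, A, d)
        if g <= f or rng.random() < math.exp((f - g) / t):
            s, f = cand, g
            if g < best_f:
                best_s, best_f = cand[:], g
    return best_s, best_f

# Assumed toy forward model: 8 transmission measurements of a 6-bin
# spectrum through increasing absorber thicknesses.
N_BINS = 6
mus = [0.2 + 0.3 * j for j in range(N_BINS)]      # per-bin attenuation
thicknesses = [0.5 * i for i in range(8)]
A = [[math.exp(-mu * x) for mu in mus] for x in thicknesses]
s_true = [0.05, 0.2, 0.3, 0.25, 0.15, 0.05]       # smooth bump, no sharp lines
d = [sum(a * s for a, s in zip(row, s_true)) for row in A]
```

Because the regularizer penalizes bin-to-bin jumps, the method favors smooth spectra, mirroring the abstract's caveat that spectra containing characteristic lines are out of scope.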
A study on three dimensional layout design by the simulated annealing method
International Nuclear Information System (INIS)
Jang, Seung Ho
2008-01-01
Modern engineered products are becoming increasingly complicated and most consumers prefer compact designs. Layout design plays an important role in many engineered products. The objective of this study is to suggest a method to apply the simulated annealing method to the arbitrarily shaped three-dimensional component layout design problem. The suggested method not only optimizes the packing density but also satisfies constraint conditions among the components. The algorithm and its implementation as suggested in this paper are extendable to other research objectives.
Feng, Yingang
2017-01-01
The use of NMR methods to determine the three-dimensional structures of carbohydrates and glycoproteins is still challenging, in part because of the lack of standard protocols. In order to increase the convenience of structure determination, the topology and parameter files for carbohydrates in the program Crystallography & NMR System (CNS) were investigated and new files were developed to be compatible with the standard simulated annealing protocols for proteins and nucleic acids. Recalculat...
Douglas, Julie A.; Sandefur, Conner I.
2008-01-01
In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a ge...
Optimal design of a DC MHD pump by simulated annealing method
Directory of Open Access Journals (Sweden)
Bouali Khadidja
2014-01-01
Full Text Available In this paper a design methodology for a magnetohydrodynamic pump is proposed. The methodology is based on direct interpretation of the design problem as an optimization problem. The simulated annealing method is used for the optimal design of a DC MHD pump. The optimization procedure uses an objective function, here the minimization of the mass. The constraints are both geometric and electromagnetic in type. The obtained results are reported.
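The record gives no equations, so here is a minimal, self-contained sketch of the same design-as-optimization pattern: simulated annealing minimizing a mass-like objective under an equality constraint folded in as a quadratic penalty. The "design" is a deliberately simple stand-in (an open-top cylinder of fixed volume), not MHD pump physics; its analytic optimum r = h = (1/π)^(1/3), with minimum area 3π^(1/3), lets the result be checked.

```python
import math
import random

def mass(r, h):
    # Stand-in objective: material area of an open-top cylinder
    # (side wall plus bottom), playing the role of the pump mass.
    return 2.0 * math.pi * r * h + math.pi * r ** 2

def penalized(r, h, k=200.0):
    # Equality constraint pi*r^2*h = 1 (fixed volume) as a quadratic penalty.
    return mass(r, h) + k * (math.pi * r ** 2 * h - 1.0) ** 2

def sa_design(iters=20000, t0=2.0, bounds=(0.1, 3.0), seed=5):
    rng = random.Random(seed)
    lo, hi = bounds
    x = [rng.uniform(lo, hi), rng.uniform(lo, hi)]      # [r, h]
    f = penalized(*x)
    best_x, best_f = x[:], f
    for k in range(iters):
        t = t0 * 0.9996 ** k                            # geometric cooling
        cand = x[:]
        j = rng.randrange(2)
        cand[j] = min(hi, max(lo, cand[j] + rng.gauss(0.0, 0.3 * t + 0.005)))
        g = penalized(*cand)
        if g <= f or rng.random() < math.exp((f - g) / t):
            x, f = cand, g
            if g < best_f:
                best_x, best_f = cand[:], g
    return best_x, best_f
```

In the paper's setting, `mass` would be replaced by the pump mass model and the penalty terms by the geometric and electromagnetic constraints.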
A GPU implementation of the Simulated Annealing Heuristic for the Quadratic Assignment Problem
Paul, Gerald
2012-01-01
The quadratic assignment problem (QAP) is one of the most difficult combinatorial optimization problems. An effective heuristic for obtaining approximate solutions to the QAP is simulated annealing (SA). Here we describe an SA implementation for the QAP which runs on a graphics processing unit (GPU). GPUs are composed of low cost commodity graphics chips which in combination provide a powerful platform for general purpose parallel computing. For SA runs with large numbers of iterations, we fi...
Direct comparison of quantum and simulated annealing on a fully connected Ising ferromagnet
Wauters, Matteo M.; Fazio, Rosario; Nishimori, Hidetoshi; Santoro, Giuseppe E.
2017-08-01
We compare the performance of quantum annealing (QA, through Schrödinger dynamics) and simulated annealing (SA, through a classical master equation) on the p-spin infinite-range ferromagnetic Ising model, by slowly driving the system across its equilibrium, quantum or classical, phase transition. When the phase transition is second order (p = 2, the familiar two-spin Ising interaction) SA shows a remarkable exponential speed-up over QA. For a first-order phase transition (p ≥ 3, i.e., with multispin Ising interactions), in contrast, the classical annealing dynamics appears to remain stuck in the disordered phase, while we have clear evidence that QA shows a residual energy which decreases towards zero when the total annealing time τ increases, albeit in a rather slow (logarithmic) fashion. This is one of the rare examples where a limited quantum speedup, a speedup by QA over SA, has been shown to exist by direct solutions of the Schrödinger and master equations in combination with a nonequilibrium Landau-Zener analysis. We also analyze the imaginary-time QA dynamics of the model, finding a 1/τ² behavior for all finite values of p, as predicted by the adiabatic theorem of quantum mechanics. The Grover-search limit p(odd) = ∞ is also discussed.
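The classical (SA) side of this comparison is straightforward to reproduce in miniature. The sketch below runs Metropolis annealing on the mean-field energy E = -N m^p, with m the magnetization per spin; for p = 2 the transition is second order (T_c = 2 in these units) and a slow schedule orders the system. Size and schedule are assumptions, and this toy says nothing about the quantum dynamics:

```python
import math
import random

def anneal_pspin(n=200, p=2, sweeps=300, t_hi=3.0, t_lo=0.05, seed=11):
    """Metropolis annealing of the mean-field p-spin ferromagnet
    E = -n * m**p, m = (1/n) * sum(sigma_i). Returns final magnetization."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    m = sum(spins) / n
    for s in range(sweeps):
        t = t_hi * (t_lo / t_hi) ** (s / (sweeps - 1))   # geometric schedule
        for _ in range(n):                               # one Monte Carlo sweep
            i = rng.randrange(n)
            m_new = m - 2 * spins[i] / n
            de = -n * (m_new ** p - m ** p)
            if de <= 0 or rng.random() < math.exp(-de / t):
                spins[i] = -spins[i]
                m = m_new
    return m
```

Starting above T_c and cooling well below it, the p = 2 system ends close to full magnetization |m| ≈ 1; for p ≥ 3 the first-order transition would leave the same dynamics trapped in the disordered phase, which is the abstract's point.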
Defect production in simulated cascades: cascade quenching and short-term annealing
International Nuclear Information System (INIS)
Heinisch, H.L.
1982-01-01
Defect production in high energy displacement cascades has been modeled using the computer code MARLOWE to generate the cascades and the stochastic computer code ALSOME to simulate the cascade quenching and short-term annealing of isolated cascades. The quenching is accomplished by using ALSOME with exaggerated values for defect mobilities and critical reaction distances for recombination and clustering, which are in effect until the number of defect pairs equals the value determined from resistivity experiments at 4 K. Normal mobilities and reaction distances are then used during short-term annealing to a point representative of Stage III recovery. Effects of cascade interactions at low fluences are also being investigated. The quenching parameter values were empirically determined for 30 keV cascades. The results agree well with experimental information throughout the range from 1 keV to 100 keV. Even after quenching and short-term annealing, the high energy cascades behave as a collection of lower energy subcascades and lobes. Cascades generated in a crystal having thermal displacements were found to be in better agreement with experiments after quenching and annealing than those generated in a non-thermal crystal.
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
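The two move types, cell exchanges and cell displacements into empty slots, can be shown in a serial miniature (the hypercube mapping, distributed cost evaluation and tree broadcasting are beyond a sketch). Grid size, schedule and net list below are assumptions:

```python
import math
import random

def wirelength(pos, nets):
    # pos: cell -> (row, col); nets: list of two-pin (cell_a, cell_b) nets.
    return sum(
        abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
        for a, b in nets
    )

def sa_place(cells, nets, rows=3, cols=3, iters=4000, t0=5.0, seed=2):
    rng = random.Random(seed)
    slots = [(r, c) for r in range(rows) for c in range(cols)]
    rng.shuffle(slots)
    pos = dict(zip(cells, slots))          # random initial placement
    free = slots[len(cells):]              # unoccupied slots
    cost = wirelength(pos, nets)
    for k in range(iters):
        t = t0 * 0.999 ** k
        if free and rng.random() < 0.5:    # displacement into a free slot
            cell = rng.choice(cells)
            j = rng.randrange(len(free))
            old = pos[cell]
            pos[cell] = free[j]
            new_cost = wirelength(pos, nets)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                free[j] = old              # accepted: old slot becomes free
                cost = new_cost
            else:
                pos[cell] = old            # rejected: undo
        else:                              # exchange of two cells
            a, b = rng.sample(cells, 2)
            pos[a], pos[b] = pos[b], pos[a]
            new_cost = wirelength(pos, nets)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                cost = new_cost
            else:
                pos[a], pos[b] = pos[b], pos[a]
    return pos, cost
```

A parallel version would evaluate such moves concurrently on cells mapped to different processors, which is where the distributed data structure and broadcast of updated cell locations come in.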
Optimization of patch antennas via multithreaded simulated annealing based design exploration
Directory of Open Access Journals (Sweden)
James E. Richie
2017-10-01
Full Text Available In this paper, we present a new software framework for the optimization of the design of microstrip patch antennas. The proposed simulation and optimization framework implements a simulated annealing algorithm to perform design space exploration in order to identify the optimal patch antenna design. During each iteration of the optimization loop, we employ the popular MEEP simulation tool to evaluate explored design solutions. To speed up the design space exploration, the software framework is developed to run multiple MEEP simulations concurrently. This is achieved using multithreading to implement a manager-workers execution strategy. The number of worker threads is the same as the number of cores of the computer that is utilized. Thus, the computational runtime of the proposed software framework enables effective design space exploration. Simulations demonstrate the effectiveness of the proposed software framework.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Simulated annealing algorithm for solving chambering student-case assignment problem
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem, and solving it becomes more challenging as preferences, real-world constraints and problem size increase. This study focuses on solving a chambering student-case assignment problem, a variant of the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in reasonable time. The problem of assigning chambering students to cases has not previously been addressed in the literature. Law graduates must complete chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. This study therefore presents a preliminary study of the proposed assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic, demonstrating the advantages of solving the project assignment problem with metaheuristic techniques.
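The construct-then-improve pipeline can be sketched on a toy instance. Since the record does not define "total completion time" precisely, the sketch assumes a makespan-style objective (minimize the latest-finishing student, with each student working through assigned cases sequentially); the greedy builder gives the longest case to the least-loaded student, and SA then moves cases between students.

```python
import math
import random

def makespan(loads):
    return max(loads)

def greedy_assign(durations, n_students):
    # Greedy construction: longest case first, each given to the
    # currently least-loaded student.
    loads = [0.0] * n_students
    assign = {}
    for case in sorted(range(len(durations)), key=lambda c: -durations[c]):
        s = min(range(n_students), key=loads.__getitem__)
        assign[case] = s
        loads[s] += durations[case]
    return assign, loads

def sa_improve(durations, assign, loads, iters=5000, t0=2.0, seed=4):
    rng = random.Random(seed)
    assign, loads = dict(assign), loads[:]
    cost = makespan(loads)
    best_assign, best_cost = dict(assign), cost
    for k in range(iters):
        t = t0 * 0.999 ** k
        case = rng.randrange(len(durations))
        old_s = assign[case]
        new_s = rng.randrange(len(loads))
        if new_s == old_s:
            continue
        loads[old_s] -= durations[case]    # tentatively move the case
        loads[new_s] += durations[case]
        new_cost = makespan(loads)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            assign[case], cost = new_s, new_cost
            if new_cost < best_cost:
                best_assign, best_cost = dict(assign), new_cost
        else:                              # revert the move
            loads[old_s] += durations[case]
            loads[new_s] -= durations[case]
    return best_assign, best_cost
```

Greedy construction can be provably suboptimal (durations 3, 3, 2, 2, 2 on two students give a greedy makespan of 7 versus the optimum 6), and the SA stage repairs exactly this kind of gap.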
The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling
Directory of Open Access Journals (Sweden)
A. Bonilla-Petriciolet
2007-03-01
Full Text Available In this paper we report the application and evaluation of the simulated annealing (SA) optimization method for parameter estimation in vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least squares and error-in-variable approaches. The reliability and efficiency of the data-fitting procedure are also examined using different values for the algorithm parameters of the SA method. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, on difficult problems it can still converge to local optima of the objective function.
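The least-squares variant of this estimation loop fits a tiny sketch. The two-parameter model below is a generic saturating curve, not a VLE activity model, and the synthetic data are noiseless; the SA settings are assumed defaults:

```python
import math
import random

def sse(params, xs, ys):
    # Least-squares objective for the model y = a*x / (1 + b*x).
    a, b = params
    return sum((a * x / (1.0 + b * x) - y) ** 2 for x, y in zip(xs, ys))

def sa_fit(xs, ys, bounds=((0.0, 5.0), (0.0, 5.0)), iters=15000, t0=1.0, seed=8):
    rng = random.Random(seed)
    p = [rng.uniform(lo, hi) for lo, hi in bounds]
    f = sse(p, xs, ys)
    best_p, best_f = p[:], f
    for k in range(iters):
        t = t0 * 0.9995 ** k                 # geometric cooling
        cand = p[:]
        j = rng.randrange(2)
        lo, hi = bounds[j]
        cand[j] = min(hi, max(lo, cand[j] + rng.gauss(0.0, 0.5 * t + 0.01)))
        g = sse(cand, xs, ys)
        if g <= f or rng.random() < math.exp((f - g) / t):
            p, f = cand, g
            if g < best_f:
                best_p, best_f = cand[:], g
    return best_p, best_f
```

The cooling rate and step sizes play the role of the "algorithm parameters" whose influence on reliability and efficiency the paper evaluates; too fast a schedule is what leaves the search in a local optimum.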
Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xu Mingji
2017-01-01
Full Text Available Hierarchical genetic algorithms are effective for solving multivariable optimization problems. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to the multiple traveling salesman problem and improves the quality of the solution. First, it improves the design of the chromosomes' hierarchical structure to remove the redundancy of the hierarchical algorithm, suggesting a suffix design for the chromosomes. Second, to address the premature convergence of the genetic algorithm, it proposes a self-identifying crossover operator and mutation. Third, to counter the weak local search ability of the genetic algorithm, it stretches the fitness by hybridizing the genetic algorithm with the simulated annealing algorithm. Fourth, it simulates the problem of N traveling salesmen and M cities to verify feasibility. The simulations and calculations show that the improved algorithm converges quickly to a good global solution, which makes it promising for practical use.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Michael Giles, Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution of an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 44, pages 59-88, Springer, Berlin, 2005; Kyoung-Sook Moon et al., Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, Contemp. Math. 383, pages 325-343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al., Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3), using a single-level version of the adaptive algorithm, to O((TOL^-1 log(TOL))^2).
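A minimal uniform-step MLMC estimator (not the paper's adaptive, path-dependent steps) shows the core mechanics: each level's correction couples a fine and a coarse forward Euler path through shared Brownian increments, so the corrections have small variance and need few samples. The model (geometric Brownian motion), parameters and sample counts below are illustrative:

```python
import math
import random

def mlmc_gbm(levels=4, n0=4000, s0=1.0, r=0.05, sigma=0.2, tt=1.0, seed=6):
    """Multilevel Monte Carlo estimate of E[S_T] for dS = r*S dt + sigma*S dW
    via forward Euler; level l uses 2**l uniform time steps."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                               # fine steps at this level
        n_samples = max(100, n0 // 2 ** l)        # fewer samples on finer levels
        acc = 0.0
        for _ in range(n_samples):
            dt = tt / nf
            dws = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
            sf = s0
            for dw in dws:                        # fine Euler path
                sf += r * sf * dt + sigma * sf * dw
            if l == 0:
                acc += sf                         # base level: plain estimator
            else:
                sc = s0
                dtc = tt / (nf // 2)
                for i in range(0, nf, 2):         # coarse path, shared noise
                    dw = dws[i] + dws[i + 1]
                    sc += r * sc * dtc + sigma * sc * dw
                acc += sf - sc                    # level correction
        total += acc / n_samples
    return total
```

The level sums telescope, so the estimator's expectation equals that of the finest discretization; for this model the exact answer E[S_T] = exp(r*T) is available for checking. The paper's contribution replaces the uniform hierarchy with stochastic, a-posteriori-driven time steps.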
Fully Adaptive Radar Modeling and Simulation Development
2017-04-01
AFRL-RY-WP-TR-2017-0074. Fully Adaptive Radar Modeling and Simulation Development. Kristine L. Bell and Anthony Kellems, Metron, Inc. Small Business Innovation Research (SBIR) Phase I report, contract FA8650-16-M-1774. Approved for public release; distribution unlimited.
Temporary Workforce Planning with Firm Contracts: A Model and a Simulated Annealing Heuristic
Directory of Open Access Journals (Sweden)
Muhammad Al-Salamah
2011-01-01
Full Text Available The aim of this paper is to introduce a model for temporary staffing when temporary employment is managed by firm contracts and to propose a simulated annealing-based method to solve the model. Temporary employment is a policy frequently used to adjust the working hour capacity to fluctuating demand. Temporary workforce planning models have been unnecessarily simplified to account for only periodic hiring and laying off; a company can review its workforce requirement every period and make hire-fire decisions accordingly, usually with a layoff cost. We present a more realistic temporary workforce planning model that assumes a firm contract between the worker and the company, which can extend to several periods. The model assumes the traditional constraints, such as inventory balance constraints, worker availability, and labor hour mix. The costs are the inventory holding cost, training cost of the temporary workers, and the backorder cost. The mixed integer model developed for this case has been found to be difficult to solve even for small problem sizes; therefore, a simulated annealing algorithm is proposed to solve the mixed integer model. The performance of the SA algorithm is compared with the CPLEX solution.
IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD
Directory of Open Access Journals (Sweden)
A. S. Potapov
2014-09-01
Full Text Available The subject of this research is deep learning methods, in which automatic construction of feature transforms takes place in pattern recognition tasks. Multilayer autoencoders are the considered type of deep learning network; they perform a nonlinear feature transform with logistic regression as an upper classification layer. To verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a 1.1- to 1.5-fold decrease in recognition error rate for the modified method compared to the traditional method based on local optimization. Thus, no overfitting effect appears, and the possibility of improving the learning of deep networks by global optimization methods (in terms of increased recognition probability) is confirmed. The research results can be applied to improve the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
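The idea of a derivative-free global search over network weights can be caricatured at tiny scale. Here SA alone tunes the three weights of a logistic classifier on separable 2D data; this is a stand-in (no autoencoder, no MNIST, no interleaved SGD), with all settings assumed:

```python
import math
import random

def accuracy(w, data):
    # Predict class 1 when w1*x1 + w2*x2 + b >= 0.
    return sum(
        int(((w[0] * x1 + w[1] * x2 + w[2]) >= 0) == label)
        for (x1, x2), label in data
    ) / len(data)

def log_loss(w, data):
    # Mean negative log-likelihood of the logistic model, in a
    # numerically stable form (no overflow for large |z|).
    loss = 0.0
    for (x1, x2), label in data:
        z = w[0] * x1 + w[1] * x2 + w[2]
        if z >= 0:
            loss += math.log1p(math.exp(-z)) + (0.0 if label else z)
        else:
            loss += math.log1p(math.exp(z)) - (z if label else 0.0)
    return loss / len(data)

def sa_train(data, iters=4000, t0=1.0, seed=12):
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    f = log_loss(w, data)
    best_w, best_f = w[:], f
    for k in range(iters):
        t = t0 * 0.999 ** k
        cand = w[:]
        cand[rng.randrange(3)] += rng.gauss(0.0, 0.5)   # perturb one weight
        g = log_loss(cand, data)
        if g <= f or rng.random() < math.exp((f - g) / t):
            w, f = cand, g
            if g < best_f:
                best_w, best_f = cand[:], g
    return best_w, best_f
```

In the paper's setup, moves like this perturb the autoencoder weights while the regression layer is simultaneously refined by stochastic gradient descent; the Metropolis acceptance is what lets training escape the local minima that pure gradient descent settles into.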
Energy Technology Data Exchange (ETDEWEB)
Talukdar, M.S.; Torsaeter, O. [Department of Petroleum Engineering and Applied Geophysics, Norwegian University of Science and Technology, Trondheim (Norway)
2002-05-01
We report the stochastic reconstruction of chalk pore networks from limited morphological information that may be readily extracted from 2D backscatter electron (BSE) images of the pore space. The reconstruction technique employs a simulated annealing (SA) algorithm, which can be constrained by an arbitrary number of morphological descriptors. Backscatter electron images of a high-porosity North Sea chalk sample are analyzed and the morphological descriptors of the pore space are determined. The morphological descriptors considered are the void-phase two-point probability function and lineal path function computed with or without the application of periodic boundary conditions (PBC). 2D and 3D samples have been reconstructed with different combinations of the descriptors and the reconstructed pore networks have been analyzed quantitatively to evaluate the quality of reconstructions. The results demonstrate that simulated annealing technique may be used to reconstruct chalk pore networks with reasonable accuracy using the void-phase two-point probability function and/or void-phase lineal path function. Void-phase two-point probability function produces slightly better reconstruction than the void-phase lineal path function. Imposing void-phase lineal path function results in slight improvement over what is achieved by using the void-phase two-point probability function as the only constraint. Application of periodic boundary conditions appears to be not critically important when reasonably large samples are reconstructed.
Solving extra-high-order Rubik's Cube problem by a dynamic simulated annealing
Chen, Xi; Ding, Z. J.
2012-08-01
A Monte Carlo algorithm, dynamic simulated annealing, is developed to solve Rubik's Cube at any extra-high order with considerable efficiency. By designing an appropriate energy function, cooling schedule and neighborhood search algorithm, a sequence of moves can quickly decrease the degree of disorder of a cube and jump out of local energy minima in a simple but effective way. Unlike the static simulated annealing method, which adjusts the temperature parameter in the Boltzmann function, we use a dynamic procedure that alters the energy function expression instead. In addition, a solution for low-order cubes is devised and reused for highly efficient parallel programming on high-order cubes. An extra-high-order cube can then be solved in a relatively short time, which is merely proportional to the square of the order. Example calculations cost 996.6 s for a 101-order cube on a PC, and 1877 s for a 5001-order cube using a parallel program on a supercomputer with 8 nodes. The principles behind this feasible solution of Rubik's Cube at any high order, such as the method of partial stages, the design of a proper energy function, and the choice of a neighborhood search that matches the energy function, may be useful for other global optimization problems in which avoiding an enormous number of local minima in the energy landscape is the chief task.
Redesigning rain gauges network in Johor using geostatistics and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)
2015-02-03
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used a combination of a geostatistics method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum estimated variance. This shows that the combination of the geostatistics method and simulated annealing is successful in developing the new optimum rain gauge system.
International Nuclear Information System (INIS)
Mahlers, Y.P.
2002-01-01
An algorithm is developed to determine directly all the parameters of the optimal equilibrium cycle. The core reload scheme is described by discrete variables, while the cycle length, as well as the uranium enrichment and the loading of burnable poison in each feed fuel assembly, are treated as continuous variables. An important feature of the algorithm is that all these parameters are determined by solving a single large optimization problem. To search for the best reload scheme, simulated annealing is applied. The optimum cycle length, as well as the uranium enrichment and loading of burnable poison in each feed fuel assembly, are determined for each reload pattern examined using successive linear programming. The uranium enrichments and loadings of burnable poison are considered to be distinct in different feed fuel assemblies. The number of batches and their sizes are not fixed, and are also determined by the algorithm. As the first step of the numerical investigation of the algorithm, a problem of feed fuel cost minimization for a target equilibrium cycle length and fixed batch sizes is considered. The algorithm developed is demonstrated to provide about 2% lower feed fuel cost than the ordinary simulated annealing algorithm
Energy Technology Data Exchange (ETDEWEB)
Rao, R.; Buescher, K.L.; Hanagandi, V.
1995-12-31
In the optimal plant location and sizing problem, it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We will use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem. We will compare these techniques with respect to computational expense, constraint handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.
Directory of Open Access Journals (Sweden)
Kai Moriguchi
2015-01-01
We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm's versatility. The thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method's reliability, and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
Time Simulation of Bone Adaptation
DEFF Research Database (Denmark)
Bagge, Mette
1998-01-01
The structural adaptation of a three-dimensional finite element model of the proximal femur is considered. Presuming the bone possesses the optimal structure under the given loads, the bone material distribution is found by minimizing the strain energy averaged over ten load cases with a volume...
International Nuclear Information System (INIS)
Liu, Minghua; Shi, Yong; Yan, Jiashu; Yan, Yuying
2017-01-01
Highlights: • A numerical capability combining the lattice Boltzmann method with simulated annealing algorithm is developed. • Digitized representations of random porous media are constructed using limited but meaningful statistical descriptors. • Pore-scale flow and heat transfer information in random porous media is obtained by the lattice Boltzmann simulation. • The effective properties at the representative elementary volume scale are well specified using appropriate upscale averaging. - Abstract: In this article, the lattice Boltzmann (LB) method for transport phenomena is combined with the simulated annealing (SA) algorithm for digitized porous-medium construction to study flow and heat transfer in random porous media. Importantly, in contrast to previous studies which simplify porous media as arrays of regularly shaped objects or effective pore networks, the LB + SA method in this article can model statistically meaningful random porous structures in irregular morphology, and simulate pore-scale transport processes inside them. Pore-scale isothermal flow and heat conduction in a set of constructed random porous media characterized by statistical descriptors were then simulated through use of the LB + SA method. The corresponding averages over the computational volumes and the related effective transport properties were also computed based on these pore-scale numerical results. Good agreement between the numerical results and theoretical predictions or experimental data on the representative elementary volume scale was found. The numerical simulations in this article demonstrate that the combination of the LB method with the SA algorithm is a viable and powerful numerical strategy for simulating transport phenomena in random porous media in complex geometries.
International Nuclear Information System (INIS)
Visbal, Jorge H. Wilches; Costa, Alessandro M.
2016-01-01
Percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy treatment, since it describes the dosimetric properties of these beams. Using an accurate transport theory, or the Monte Carlo method, obvious differences have been shown between the dose distribution of electron beams of a clinical accelerator in a water phantom and the dose distribution of monoenergetic electrons of the accelerator's nominal energy in water. In radiotherapy, the electron spectra should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There exist three principal approaches to obtain electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement and inverse reconstruction. In this work, the Simulated Annealing method will be presented as a practical, reliable and simple approach to inverse reconstruction, and as an optimal alternative to the other options. (author)
Energy Technology Data Exchange (ETDEWEB)
Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu
2001-08-01
This work describes an alternative Simulated Annealing algorithm applied to the design of the main magnet for a Magnetic Resonance Imaging machine. The algorithm uses a probabilistic radial basis neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required by simulated annealing to achieve the global maximum, when compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
An improved hybrid topology optimization approach coupling simulated annealing and SIMP (SA-SIMP)
International Nuclear Information System (INIS)
Garcia-Lopez, N P; Sanchez-Silva, M; Medaglia, A L; Chateauneuf, A
2010-01-01
The Solid Isotropic Material with Penalization (SIMP) methodology has been used extensively due to its versatility and ease of implementation. However, one of its main drawbacks is that resulting topologies exhibit areas of intermediate densities which lack any physical meaning. This paper presents a hybrid methodology which couples simulated annealing and SIMP (SA-SIMP) in order to achieve solutions which are stiffer and predominantly black and white. Under a look-ahead strategy, the algorithm gradually fixes or removes those elements whose density resulting from SIMP is intermediate. Different strategies for selecting and fixing the fractional elements are examined using benchmark examples, which show that topologies resulting from SA-SIMP are more rigid than SIMP and predominantly black and white.
Solving the Turbine Positioning Problem for Large Offshore Wind Farms by Simulated Annealing
DEFF Research Database (Denmark)
Rivas, Rajai Aghabi; Clausen, Jens; Hansen, Kurt Schaldemose
2009-01-01
The current paper is concerned with determining the optimal layout of the turbines inside large offshore wind farms by means of an optimization algorithm. We call this the Turbine Positioning Problem. To achieve this goal a simulated annealing algorithm has been devised, where three types of local search operations are performed recursively until the system converges. The effectiveness of the proposed algorithm is demonstrated on a suite of real-life test cases, including the Horns Rev offshore wind farm. The results are verified using a commercial wind resource software, indicating that this method... is negligible while, as the wind farm's size reduces, the differences start becoming significant. A sensitivity analysis is also performed showing that a greater density of turbines in the perimeter of the optimized wind farm reduces the wake losses even if the wind climate changes.
Directory of Open Access Journals (Sweden)
M. Abdul-Niby
2016-04-01
The Traveling Salesman Problem (TSP) is an integer programming problem that falls into the category of NP-hard problems. As the problem becomes larger, there is no guarantee that optimal tours will be found within reasonable computation time. Heuristic techniques, like genetic algorithms and simulated annealing, can solve TSP instances with different levels of accuracy. Choosing which algorithm to use in order to get the best solution is still considered a hard choice. This paper suggests domain reduction as a tool to be combined with any meta-heuristic so that the obtained results will be almost the same regardless of the meta-heuristic chosen. The hybrid approach of combining domain reduction with any meta-heuristic thus removes the challenge of choosing an algorithm that matches the TSP instance in order to get the best results.
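The SA half of such a hybrid is easy to make concrete. Below is a minimal, self-contained sketch of simulated annealing on the TSP with a 2-opt neighbourhood; this is our own illustrative code, not the paper's implementation, and the function names and parameter values are arbitrary choices:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, t0=10.0, cooling=0.995, steps=20000, seed=0):
    """Simulated annealing for the TSP using 2-opt segment reversals."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour[:]
    cost = best_cost = tour_length(tour, dist)
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
        c = tour_length(cand, dist)
        # Metropolis rule: always accept improvements, sometimes accept worse tours
        if c < cost or rng.random() < math.exp((cost - c) / t):
            tour, cost = cand, c
            if cost < best_cost:
                best, best_cost = tour[:], cost
        t *= cooling  # geometric cooling
    return best, best_cost
```

On a toy instance of four cities at the corners of a unit square, the search settles on the perimeter tour of length 4; for larger instances the quality depends on the cooling parameters, which is where a domain-reduction preprocessing step would shrink the search space first.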
A Simulated Annealing method to solve a generalized maximal covering location problem
Directory of Open Access Journals (Sweden)
M. Saeed Jabalameli
2011-04-01
The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In a classical sense, MCLP has three main implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of modelling formulations have been derived: the gradual cover models, the cooperative cover models, and the variable radius models. In this paper, we develop a special form of MCLP which combines the characteristics of gradual cover models, cooperative cover models, and variable radius models. The proposed problem has many applications, such as locating cell phone towers. The model is formulated as a mixed integer non-linear program (MINLP). In addition, a simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated on a set of randomly generated problems.
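For the classical all-or-nothing special case (without the gradual, cooperative and variable-radius extensions of the paper), a simulated annealing search over k-facility subsets can be sketched as follows; the names and parameters are our own illustrative choices:

```python
import math
import random

def covered(selection, cover_sets):
    """Set of demand points covered by the selected facilities."""
    pts = set()
    for f in selection:
        pts |= cover_sets[f]
    return pts

def anneal_mclp(cover_sets, k, t0=2.0, cooling=0.99, steps=2000, seed=0):
    """SA for classical MCLP: choose k facilities maximizing covered demand.
    Neighbour move: swap one selected facility for an unselected one."""
    rng = random.Random(seed)
    facilities = list(range(len(cover_sets)))
    sel = rng.sample(facilities, k)
    score = len(covered(sel, cover_sets))
    best, best_score = sel[:], score
    t = t0
    for _ in range(steps):
        cand = sel[:]
        out = rng.randrange(k)
        options = [f for f in facilities if f not in cand]
        if not options:
            break
        cand[out] = rng.choice(options)
        s = len(covered(cand, cover_sets))
        # Maximization: accept better always, worse with Boltzmann probability
        if s > score or rng.random() < math.exp((s - score) / t):
            sel, score = cand, s
            if score > best_score:
                best, best_score = sel[:], score
        t *= cooling
    return best, best_score
```

The gradual and cooperative variants of the paper would replace the all-or-nothing `len(covered(...))` objective with a distance-decayed or signal-summing coverage function; the annealing loop itself is unchanged.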
Feng, Yingang
2017-01-01
The use of NMR methods to determine the three-dimensional structures of carbohydrates and glycoproteins is still challenging, in part because of the lack of standard protocols. In order to increase the convenience of structure determination, the topology and parameter files for carbohydrates in the program Crystallography & NMR System (CNS) were investigated and new files were developed to be compatible with the standard simulated annealing protocols for proteins and nucleic acids. Recalculating the published structures of protein-carbohydrate complexes and glycosylated proteins demonstrates that the results are comparable to the published structures which employed more complex procedures for structure calculation. Integrating the new carbohydrate parameters into the standard structure calculation protocol will facilitate three-dimensional structural study of carbohydrates and glycosylated proteins by NMR spectroscopy.
Habibulla, Yusupjan
2017-10-01
The minimal dominating set (MDS) problem is a prototypical hard combinatorial optimization problem. We recently studied this problem using the cavity method. Although we obtained a solution for a given graph that gives a very good estimate of the size of the minimal dominating set, we do not know whether there is a ground state solution or how many solutions exist in the ground state. We have therefore continued to develop a one-step replica symmetry breaking theory to investigate the ground state energy of the MDS problem. First, we find that the solution space for the MDS problem exhibits both a condensation transition and a cluster transition on regular random graphs, and confirm this using a simulated annealing dynamical process. Second, we develop a zero-temperature survey propagation algorithm on Erdős–Rényi random graphs to estimate the ground state energy, and obtain a survey propagation decimation algorithm that achieves results as good as the belief propagation decimation algorithm.
Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi
Portfolio optimization determines the proportion-weighted combination of assets in a portfolio so as to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed in a past period to keep its performance in a future period. In order to keep the good performance of portfolios, we propose the extended information ratio as an objective function, using the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, we generate each neighbour by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios keep their good performance even when the market trend of the future period differs from that of the past period.
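The neighbour operation that perturbs the weight structure can be illustrated with a minimal sketch. For simplicity it minimizes portfolio variance rather than the extended information ratio of the paper (which is not fully specified here); the function names and parameters are our own assumptions:

```python
import math
import random

def portfolio_variance(w, cov):
    """w^T * Cov * w for a weight vector and covariance matrix."""
    n = len(w)
    return sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))

def anneal_weights(cov, t0=1.0, cooling=0.99, steps=5000, delta=0.05, seed=1):
    """Minimize portfolio variance over long-only weights summing to 1.
    The neighbour move shifts a small amount of weight between two assets,
    which preserves the budget constraint by construction."""
    rng = random.Random(seed)
    n = len(cov)
    w = [1.0 / n] * n                       # start from the equal-weight portfolio
    cost = portfolio_variance(w, cov)
    best, best_cost = w[:], cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        d = rng.uniform(0.0, min(delta, w[i]))  # keeps weights non-negative
        cand = w[:]
        cand[i] -= d
        cand[j] += d
        c = portfolio_variance(cand, cov)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            w, cost = cand, c
            if cost < best_cost:
                best, best_cost = w[:], cost
        t *= cooling
    return best, best_cost
```

Swapping the objective for an information-ratio-style measure only changes `portfolio_variance` and the sign convention; the weight-shifting neighbour stays the same.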
Simulated annealing in networks for computing possible arrangements for red and green cones
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and then the cones are allowed to determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity, and hence more revealing moire patterns, than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
International Nuclear Information System (INIS)
Mhamdi, B.; Grayaa, K.; Aguili, T.
2011-01-01
In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the measured scattered field derived from transverse magnetic illuminations, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global search and local search for finding the optimal assignment in reasonable time, and simulated annealing (SA) uses a certain probability to avoid being trapped in a local optimum. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of some conducting cylinders, and good agreement with the original shapes is observed.
Directory of Open Access Journals (Sweden)
N. Shivasankaran
2013-04-01
Scheduling problems are generally treated as NP-complete combinatorial optimization problems, which are multi-objective and multi-constraint ones. Repair shop job sequencing and operator allocation is one such NP-complete problem. For such problems, an efficient technique is required that explores a wide range of the solution space. This paper deals with the simulated annealing technique, a meta-heuristic, to solve the complex car sequencing and operator allocation problem in a car repair shop. The algorithm is tested with several constraint settings, and the solution quality exceeds the results reported in the literature with high convergence speed and accuracy. This algorithm could be considered quite effective where other heuristic routines fail.
A hybrid Tabu search-simulated annealing method to solve quadratic assignment problem
Directory of Open Access Journals (Sweden)
Mohamad Amin Kaviani
2014-06-01
The quadratic assignment problem (QAP) has been considered one of the most complicated problems. The problem is NP-hard, and optimal solutions are not available for large-scale instances. This paper presents a hybrid method using tabu search and simulated annealing to solve QAP, called TABUSA. Using some well-known problems from QAPLIB generated by Burkard et al. (1997) [Burkard, R. E., Karisch, S. E., & Rendl, F. (1997). QAPLIB - a quadratic assignment problem library. Journal of Global Optimization, 10(4), 391-403.], the two methods, TABUSA and TS, are both coded in MATLAB and compared in terms of relative percentage deviation (RPD) for all instances. The performance of the proposed method is examined against tabu search, and the preliminary results indicate that the hybrid method is capable of solving real-world problems efficiently.
Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization
Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)
2001-01-01
Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA), was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent, Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3
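The Metropolis acceptance rule and the problem-dependent geometric cooling schedule described above can be sketched as follows. This is an illustrative Python sketch with names of our own choosing; DSSA's contribution is precisely to replace the hand-tuned `t0` and `alpha` below with a problem-independent schedule:

```python
import math

def metropolis_acceptance(delta, t):
    """Probability of accepting a move that worsens the objective by delta
    (minimization): improving moves are always accepted, non-improving
    moves with probability exp(-delta / t)."""
    return 1.0 if delta <= 0 else math.exp(-delta / t)

def geometric_schedule(t0, alpha, steps):
    """Classic hand-tuned cooling schedule: the temperature decays by a
    fixed factor alpha each step, gradually reducing the acceptance rate
    of non-improving moves."""
    t = t0
    for _ in range(steps):
        yield t
        t *= alpha
```

At the initial temperature most proposals pass the acceptance test, so the search moves freely through configuration space; as `t` decays, the walk converges toward local improvement only. Choosing `t0`, `alpha` and the step count is exactly the problem-dependent experimentation that DSSA is designed to eliminate.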
Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.
2015-12-01
Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial-model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could
Parallel adaptive simulations on unstructured meshes
International Nuclear Information System (INIS)
Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A
2007-01-01
This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers
International Nuclear Information System (INIS)
Wu, Wei; Jiang, Fangming
2013-01-01
We adapt the simulated annealing approach for reconstruction of the 3D microstructure of a LiCoO2 cathode from a commercial Li-ion battery. The real size distribution curve of LiCoO2 particles is applied to regulate the reconstruction process. By discretizing a 40 × 40 × 40 μm cathode volume with 8,000,000 numerical cubes, the cathode involving three individual phases: 1) LiCoO2 as active material, 2) pores or electrolyte, and 3) additives (polyvinylidene fluoride + carbon black) is reconstructed. The microstructural statistical properties required in the reconstruction process are extracted from 2D focused ion beam/scanning electron microscopy images or obtained by analyzing the powder mixture used to make the cathode. Characterization of the reconstructed cathode gives important structural and transport properties including the two-point correlation functions, volume-specific surface area between phases, tortuosity and geometrical connectivity of individual phases. - Highlights: • Simulated annealing approach is adapted for 3D reconstruction of LiCoO2 cathode. • Real size distribution of LiCoO2 particles is applied in reconstruction process. • Reconstructed cathode accords with real one at important statistical properties. • Effective electrode-characterization approaches have been established. • Extensive characterization gives important structural properties, say, tortuosity
International Nuclear Information System (INIS)
Zameer, Aneela; Mirza, Sikander M.; Mirza, Nasir M.
2014-01-01
Highlights: • SA and GA based optimization for loading pattern has been carried out. • The LEOPARD and MCRAC codes for a typical PWR have been used. • At high annealing rates, the SA shows premature convergence. • Then novel crossover and mutation operators are proposed in this work. • Genetic Algorithms exhibit stagnation for small population sizes. - Abstract: A comparative study of the Simulated Annealing and Genetic Algorithms based optimization of loading pattern with power profile flattening as the goal, has been carried out using the LEOPARD and MCRAC neutronic codes, for a typical 300 MWe PWR. At high annealing rates, Simulated Annealing exhibited tendency towards premature convergence while at low annealing rates, it failed to converge to global minimum. The new ‘batch composition preserving’ Genetic Algorithms with novel crossover and mutation operators are proposed in this work which, consistent with the earlier findings (Yamamoto, 1997), for small population size, require comparable computational effort to Simulated Annealing with medium annealing rates. However, Genetic Algorithms exhibit stagnation for small population size. A hybrid Genetic Algorithms (Simulated Annealing) scheme is proposed that utilizes inner Simulated Annealing layer for further evolution of population at stagnation point. The hybrid scheme has been found to escape stagnation in bcp Genetic Algorithms and converge to the global minima with about 51% more computational effort for small population sizes
The behaviour of adaptive bone-remodeling simulation models
Weinans, H.; Huiskes, R.; Grootenboer, H.J.
1992-01-01
The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule applied to
International Nuclear Information System (INIS)
Zerda Lerner, Alberto de la
2004-01-01
Simulated annealing (SA) is a multivariate combinatorial optimization process that searches the configuration space of possible solutions by a random walk, guided only by the goal of minimizing the objective function. The decision-making capabilities of a fuzzy inference system are applied to guide the SA search, to look for solutions which, in addition to optimizing a plan in dosimetric terms, also present some clinically desirable spatial features. No a priori constraints are placed on the number or position of needles or on the seed loading sequence of individual needles. These additional degrees of freedom are balanced by giving preference to plans with seed distributions that are balanced in the right/left and anterior/posterior halves of each axial slice, and with an approximately uniform local seed density. Piecewise linear membership functions are constructed to represent these requirements. Before a step in the random search is subject to the SA test, the expert functions representing the spatial seed-distribution requirements are evaluated. Thus, the expert planner's knowledge enters into the decision as to the 'goodness' of a seed configuration regarding the spatial seed-distribution goals. When a step in the random walk yields a seed configuration that is found wanting, a specific number of additional steps in the local neighborhood is attempted until either improvement in the spatial requirements is achieved, or the allowed number of attempts is exhausted. In the latter case, the expert system desists and the unfavorable step is taken, moving on to the simulated annealing test. The number of attempts is determined by the fuzzy logic inference engine and depends on how badly the expert requirement is violated. The program is interfaced with a commercial treatment planning system (TPS) to import optimized seed plans for isodose display and analysis. Execution in a 1.5 GHz computer takes less than a minute, adequate for real-time planning
Energy Technology Data Exchange (ETDEWEB)
Chiapetto, M. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium); Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Becquart, C.S. [Unite Materiaux et Transformations (UMET), UMR 8207, Universite de Lille 1, ENSCL, Villeneuve d' Ascq (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Domain, C. [EDF R and D, Departement Materiaux et Mecanique des Composants, Les Renardieres, Moret sur Loing (France); Laboratoire commun EDF-CNRS, Etude et Modelisation des Microstructures pour le Vieillissement des Materiaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Mol (Belgium)
2015-01-01
Post-irradiation annealing experiments are often used to obtain clearer information on the nature of defects produced by irradiation. However, their interpretation is not always straightforward without the support of physical models. We apply here a physically-based set of parameters for object kinetic Monte Carlo (OKMC) simulations of the nanostructural evolution of FeMnNi alloys under irradiation to the simulation of their post-irradiation isochronal annealing, from 290 to 600 °C. The model adopts a 'grey alloy' scheme, i.e. the solute atoms are not introduced explicitly, only their effect on the properties of point-defect clusters is. Namely, it is assumed that both vacancy and SIA clusters are significantly slowed down by the solutes. The slowing down increases with size until the clusters become immobile. Specifically, the slowing down of SIA clusters by Mn and Ni can be justified in terms of the interaction between these atoms and crowdions in Fe. The results of the model compare quantitatively well with post-irradiation isochronal annealing experimental data, providing clear insight into the mechanisms that determine the disappearance or re-arrangement of defects as functions of annealing time and temperature. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Adaption of core simulations to detector readings
International Nuclear Information System (INIS)
Lindahl, S.Oe.
1985-05-01
The shortcomings of the conventional core supervision methods are briefly discussed. A new strategy for core surveillance is proposed. The strategy is based on a combination of analytical evaluation of detailed core power and adaption of these to detector measurements. The adaption is carried out 1) each time the simulator is executed, by use of averaged detector readings, and 2) once a year (approximately), in which case the coefficients of the simulator's equations are reviewed. In the yearly review, calculations are tuned to measurements (TIP, γ-scannings, k-eff) by parameter optimization or by inversion of the diffusion equation. The proposed strategy is believed to increase the accuracy of the core surveillance, to yield improved thermal margins, to increase the accuracy of core predictions and design calculations, and to lessen the dependence of core surveillance on the detector equipment. (author)
Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing
Directory of Open Access Journals (Sweden)
Khanh Doan V.K.
2014-06-01
Full Text Available Thermo-electric coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant and no moving parts are needed. TECs generate no electrical or acoustical noise and are environmentally friendly. Over the past decades, much research has been devoted to improving the efficiency of TECs by enhancing the material parameters and design parameters. The material parameters are restricted by currently available materials and module fabricating technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, a review of previous research is first conducted to show the diversity of optimization approaches used in TEC design to enhance performance and efficiency. After that, single-objective optimization problems (SOP) are solved using a Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize geometry properties so that TECs operate at near-optimal conditions. Both equality and inequality constraints were taken into consideration.
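The GA/SA comparison above rests on the standard simulated annealing loop: perturb the design vector, always accept improvements, and accept worse moves with probability exp(-ΔE/T) under a cooling schedule. The sketch below is illustrative only, not the authors' code: the cost function is a toy penalized quadratic standing in for a TEC objective, and all parameter values (cooling rate, step size, penalty weight) are assumptions.

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=1.0, t_min=1e-4, alpha=0.95, moves=50):
    """Minimize f over a box domain with a geometric cooling schedule."""
    random.seed(0)
    x, fx = list(x0), f(x0)
    best, best_f = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(moves):
            # propose a neighbour by perturbing one coordinate inside its bounds
            i = random.randrange(len(x))
            lo, hi = bounds[i]
            cand = list(x)
            cand[i] = min(hi, max(lo, x[i] + random.gauss(0, 0.1 * (hi - lo))))
            fc = f(cand)
            # Metropolis rule: always accept improvements, sometimes accept worse
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best, best_f = list(x), fx
        t *= alpha  # geometric cooling
    return best, best_f

# Toy stand-in for a TEC cost: a quadratic with the inequality constraint
# x + y >= 1 handled by a quadratic penalty term (all values hypothetical).
cost = lambda v: v[0] ** 2 + v[1] ** 2 + 100.0 * max(0.0, 1.0 - v[0] - v[1]) ** 2
sol, val = simulated_annealing(cost, [2.0, 2.0], [(-3.0, 3.0), (-3.0, 3.0)])
```

The quadratic penalty is one common way to fold the paper's equality/inequality constraints into a single objective; the true constrained minimum of this toy cost sits near (0.5, 0.5).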
Chen Nian; Li, Ge
2004-01-01
Undulator field errors influence the electron beam trajectories and lower the radiation quality. The angular deflection of the electron beam is determined by the first field integral, the orbital displacement of the electron beam is determined by the second field integral, and the radiation quality can be evaluated by the rms field error or phase error. Appropriate ordering of the magnets can greatly reduce these errors. We apply a modified simulated annealing algorithm to this multi-objective optimization problem, taking the first field integral, the second field integral and the rms field error as objective functions. Undulators with small field errors can be designed by this method within a reasonable calculation time even for the case of hundreds of magnets (first field integral reduced to 10⁻⁶ T·m, second integral to 10⁻⁶ T·m² and rms field error to 0.01%). Thus, the field correction after assembly of the undulator will be greatly simplified. This paper gives the optimizing process in detail and puts forward a new method to quickly calculate the rms field e...
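For magnet-ordering problems like the one above, simulated annealing operates on permutations, with pairwise swaps as the move set. The following is a hedged sketch, not the authors' algorithm: it optimizes a single toy objective (a running-sum proxy for the first field integral) rather than the paper's three objectives, and the error values and schedule parameters are invented for illustration.

```python
import math
import random

def anneal_ordering(errs, cost, t0=1.0, alpha=0.9, t_min=1e-3, sweeps=200):
    """Order magnets by pairwise swap moves under a Metropolis criterion."""
    random.seed(1)
    order = list(range(len(errs)))
    c = cost(order, errs)
    best, best_c = list(order), c
    t = t0
    while t > t_min:
        for _ in range(sweeps):
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            c_new = cost(order, errs)
            if c_new < c or random.random() < math.exp((c - c_new) / t):
                c = c_new                                   # accept the swap
                if c < best_c:
                    best, best_c = list(order), c
            else:
                order[i], order[j] = order[j], order[i]     # undo it
        t *= alpha
    return best, best_c

# Toy single objective standing in for the first field integral: the running
# sum of signed magnet field errors, accumulated along the girder.
def integral_proxy(order, errs):
    s = total = 0.0
    for k in order:
        s += errs[k]
        total += abs(s)
    return total

errs = [(-1) ** k * (1.0 + 0.1 * k) for k in range(20)]  # hypothetical errors
order, best_c = anneal_ordering(errs, integral_proxy)
```

A real multi-objective version would score each ordering by a weighted sum (or Pareto ranking) of the first integral, second integral and rms error, but the swap-move annealing skeleton stays the same.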
Directory of Open Access Journals (Sweden)
Ümmühan Başaran Filik
2010-01-01
Full Text Available This paper presents the solution of the unit commitment (UC) problem using the Modified Subgradient (MSG) method combined with a Simulated Annealing (SA) algorithm. The UC problem is one of the important hard-to-solve problems in power system engineering. Lagrangian relaxation (LR) based methods are commonly used to solve the UC problem. The main disadvantage of this group of methods is the difference between the dual and the primal solution, which causes significant problems in the quality of the feasible solution. In this paper, the MSG method, which does not require any convexity or differentiability assumptions, is used for solving the UC problem. Depending on the initial value, the MSG method reaches a zero duality gap. The SA algorithm is used to assign an appropriate initial value for the MSG method. The major advantage of the proposed approach is that it guarantees a zero duality gap independently of the size of the problem. In order to show the advantages of this approach, the four-unit Tuncbilek thermal plant and a ten-unit thermal plant, which are commonly used in the literature, are chosen as test systems. The penalty function (PF) method is also used for comparison with the proposed method in terms of total cost and UC schedule.
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework makes it feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned with the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
Directory of Open Access Journals (Sweden)
Jin Qin
2015-01-01
Full Text Available A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors.
An interactive system for creating object models from range data based on simulated annealing
International Nuclear Information System (INIS)
Hoff, W.A.; Hood, F.W.; King, R.H.
1997-01-01
In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high-level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. An interactive modeling system, however, combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy.
Simulated Annealing-Based Ant Colony Algorithm for Tugboat Scheduling Optimization
Directory of Open Access Journals (Sweden)
Qi Xu
2012-01-01
Full Text Available As the “first service station” for ships in the whole port logistics system, the tugboat operation system is one of the most important systems in port logistics. This paper formulates the tugboat scheduling problem as a multiprocessor task scheduling problem (MTSP) after analyzing the characteristics of tugboat operation. The model considers the factors of multiple anchorage bases, different operation modes, and three stages of operations (berthing/shifting-berth/unberthing). The objective is to minimize the total operation time for all tugboats in a port. A hybrid simulated annealing-based ant colony algorithm is proposed to solve the addressed problem. Numerical experiments without the shifting-berth operation verified the effectiveness of the algorithm and showed that more efficient sailing is possible if tugboats return to the anchorage base in a timely manner; the experiments with the shifting-berth operation show that the objective is most sensitive to the proportion of shifting-berth operations, is influenced slightly by the tugboat deployment scheme, and is not sensitive to the handling operation times.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.
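A genetic-simulated annealing hybrid of the kind described above typically embeds a Metropolis-style local search inside the GA generation loop, so the GA supplies global exploration and the annealing walk supplies local refinement. The sketch below illustrates one plausible arrangement under stated assumptions; it is not the IGSA algorithm itself, and the operators, toy fitness function, and parameters are all illustrative.

```python
import math
import random

def ga_with_sa_refinement(f, bounds, pop=20, gens=40, temp=1.0, cool=0.9):
    """Genetic algorithm (tournament selection, blend crossover, Gaussian
    mutation) whose elite is refined each generation by a short
    simulated-annealing walk, as in a genetic-simulated annealing hybrid."""
    random.seed(2)
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(P, key=f)
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(random.sample(P, 3), key=f)      # tournament selection
            b = min(random.sample(P, 3), key=f)
            w = random.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [min(hi, max(lo, ci + random.gauss(0, 0.05 * (hi - lo))))
                     for ci, (lo, hi) in zip(child, bounds)]     # mutation
            Q.append(child)
        P = Q
        # short simulated-annealing walk around the current elite
        x, fx = list(best), f(best)
        for _ in range(25):
            cand = [min(hi, max(lo, xi + random.gauss(0, 0.02 * (hi - lo))))
                    for xi, (lo, hi) in zip(x, bounds)]
            fc = f(cand)
            if fc < fx or random.random() < math.exp((fx - fc) / temp):
                x, fx = cand, fc
        temp *= cool
        best = min(P + [x, best], key=f)             # elitism keeps the best
    return best, f(best)

f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2  # toy fitness, min 0 at (1, -2)
sol, val = ga_with_sa_refinement(f, [(-4.0, 4.0), (-4.0, 4.0)])
```

In the paper's setting the individuals would encode candidate function expressions and the fitness would be a forecasting error over the chaotic series; the hybrid control flow would be the same.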
International Nuclear Information System (INIS)
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-01-01
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques to parameter optimization of the parametric quantum chemistry method (CATIVIC) was performed. A set of organic molecules was selected to test these techniques. A comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
Automated integration of genomic physical mapping data via parallel simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Slezak, T.
1994-06-01
The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps in the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
WEAR PERFORMANCE OPTIMIZATION OF SILICON NITRIDE USING GENETIC AND SIMULATED ANNEALING ALGORITHM
Directory of Open Access Journals (Sweden)
SACHIN GHALME
2017-12-01
Full Text Available Replacing a damaged joint with a suitable alternative material is a prime requirement for patients with arthritis. The generation of wear particles in an artificial joint during movement is a serious issue that leads to aseptic loosening of the joint. Research in the field of bio-tribology seeks materials with minimum wear volume loss so as to extend joint life. Silicon nitride (Si3N4) is a non-oxide ceramic suggested as a new alternative for hip/knee joint replacement. Hexagonal boron nitride (hBN) is recommended as a solid lubricant additive to improve the wear performance of Si3N4. In this paper, an attempt has been made to evaluate the optimum combination of load and % volume of hBN in Si3N4 to minimize wear volume loss (WVL). The experiments were conducted according to the Design of Experiments (DoE) Taguchi method and a mathematical model was developed. Further, this model was processed with a Genetic Algorithm (GA) and Simulated Annealing (SA) to find the optimum percentage of hBN in Si3N4 to minimize wear volume loss against an alumina (Al2O3) counterface. The Taguchi method suggests a 15 N load and 8% volume of hBN to minimize the WVL of Si3N4, while GA and SA optimization suggest an 11.08 N load with 12.115% volume of hBN and an 11.0789 N load with 12.128% volume of hBN, respectively, to minimize WVL in Si3N4.
Tsang, Herbert H; Wiese, Kay C
2010-01-01
Ribonucleic acid (RNA), a single-stranded linear molecule, is essential to all biological systems. Different regions of the same RNA strand will fold together via base pair interactions to make intricate secondary and tertiary structures that guide crucial homeostatic processes in living organisms. Since the structure of RNA molecules is the key to their function, algorithms for the prediction of RNA structure are of great value. In this article, we demonstrate the usefulness of SARNA-Predict, an RNA secondary structure prediction algorithm based on Simulated Annealing (SA). A performance evaluation of SARNA-Predict in terms of prediction accuracy is made via comparison with eight state-of-the-art RNA prediction algorithms: mfold, Pseudoknot (pknotsRE), NUPACK, pknotsRG-mfe, Sfold, HotKnots, ILM, and STAR. These algorithms are from three different classes: heuristic, dynamic programming, and statistical sampling techniques. An evaluation for the performance of SARNA-Predict in terms of prediction accuracy was verified with native structures. Experiments on 33 individual known structures from eleven RNA classes (tRNA, viral RNA, antigenomic HDV, telomerase RNA, tmRNA, rRNA, RNaseP, 5S rRNA, Group I intron 23S rRNA, Group I intron 16S rRNA, and 16S rRNA) were performed. The results presented in this paper demonstrate that SARNA-Predict can out-perform other state-of-the-art algorithms in terms of prediction accuracy. Furthermore, there is substantial improvement of prediction accuracy by incorporating a more sophisticated thermodynamic model (efn2).
Su, Hongsheng
2017-12-18
Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence speed and easily falling into local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the capabilities of the algorithm are well balanced between global searching and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in flow calculation by the backward or forward sweep method, and reactive power sharing principles and allocation theory are applied to determine the initial reactive power value and execute subsequent corrections, thus providing the algorithm a better start to speed up the convergence. Finally, a mathematical model of the minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through applications in an IEEE 33-node distributed system, it is found that the proposed method can achieve desirable economic efficiency and a safer voltage level relative to traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.
Optimization of a hydrometric network extension using specific flow, kriging and simulated annealing
Chebbi, Afef; Kebaili Bargaoui, Zoubeida; Abid, Nesrine; da Conceição Cunha, Maria
2017-12-01
In hydrometric stations, water levels are continuously observed and discharge rating curves are constantly updated to achieve accurate river level and discharge observations. An adequate spatial distribution of hydrological gauging stations is of great interest for river regime characterization, water infrastructure design, water resources management and ecological surveys. Due to the increase of riverside population and the associated flood risk, hydrological networks constantly need to be developed. This paper suggests taking advantage of kriging approaches to improve the design of a hydrometric network. The context deals with the application of an optimization approach using ordinary kriging and simulated annealing (SA) in order to identify the best locations to install new hydrometric gauges. The task at hand is to extend an existing hydrometric network in order to estimate, at ungauged sites, the average specific annual discharge, which is a key basin descriptor. This methodology is developed for the hydrometric network of the transboundary Medjerda River in the north of Tunisia. A Geographic Information System (GIS) is adopted to delineate basin limits and centroids; the latter are adopted to assign the locations of basins in the kriging development. Scenarios where the size of an existing 12-station network is alternatively increased by 1, 2, 3, 4 and 5 new station(s) are investigated using geo-regression and minimization of the variance of the kriging errors. The analysis of the optimized locations from one scenario to another shows a perfect conformity with respect to the location of the new sites. The new locations ensure a better spatial coverage of the study area, as seen from the increase of both the average and the maximum of inter-station distances after optimization. The optimization procedure selects the basins that ensure the shifting of the mean drainage area towards higher specific discharges.
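Network-extension studies of this type search over subsets of candidate sites, scoring each subset by the kriging error variance. As a hedged illustration (not the authors' method), the sketch below replaces the kriging variance with a simple coverage proxy, the mean distance from each candidate basin centroid to its nearest gauge, and explores subsets with swap moves under simulated annealing; the coordinates and parameters are invented for the example.

```python
import math
import random

def anneal_new_sites(existing, candidates, k, t0=1.0, alpha=0.9, moves=100):
    """Choose k candidate basins to gauge, minimizing a coverage proxy for
    the kriging variance: the mean distance from every candidate basin
    centroid to its nearest gauge (existing or newly selected)."""
    random.seed(3)

    def coverage_cost(chosen):
        gauges = existing + [candidates[i] for i in chosen]
        return sum(min(math.dist(c, g) for g in gauges)
                   for c in candidates) / len(candidates)

    chosen = random.sample(range(len(candidates)), k)
    cur = coverage_cost(chosen)
    best, best_c = list(chosen), cur
    t = t0
    while t > 1e-3:
        for _ in range(moves):
            # swap one selected site for a currently unselected candidate
            trial = list(chosen)
            trial[random.randrange(k)] = random.choice(
                [j for j in range(len(candidates)) if j not in chosen])
            c_new = coverage_cost(trial)
            if c_new < cur or random.random() < math.exp((cur - c_new) / t):
                chosen, cur = trial, c_new
                if cur < best_c:
                    best, best_c = list(chosen), cur
        t *= alpha
    return best, best_c

existing = [(0.0, 0.0), (9.0, 9.0)]                      # hypothetical gauges
candidates = [(float(i), float(j)) for i in range(10) for j in range(10)]
best, best_c = anneal_new_sites(existing, candidates, k=3)
```

In the paper the score is the variance of the ordinary-kriging estimation error at ungauged basins, which requires a fitted variogram; the subset-search skeleton is unchanged.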
Lee, Cheng-Kuang
2014-12-10
© 2014 American Chemical Society. The nanomorphologies of the bulk heterojunction (BHJ) layer of polymer solar cells are extremely sensitive to the electrode materials and thermal annealing conditions. In this work, the correlations of electrode materials, thermal annealing sequences, and resultant BHJ nanomorphological details of P3HT:PCBM BHJ polymer solar cell are studied by a series of large-scale, coarse-grained (CG) molecular simulations of system comprised of PEDOT:PSS/P3HT:PCBM/Al layers. Simulations are performed for various configurations of electrode materials as well as processing temperature. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link between morphology and processing conditions. Our analysis indicates that vertical phase segregation of P3HT:PCBM blend strongly depends on the electrode material and thermal annealing schedule. A thin P3HT-rich film is formed on the top, regardless of bottom electrode material, when the BHJ layer is exposed to the free surface during thermal annealing. In addition, preferential segregation of P3HT chains and PCBM molecules toward PEDOT:PSS and Al electrodes, respectively, is observed. Detailed morphology analysis indicated that, surprisingly, vertical phase segregation does not affect the connectivity of donor/acceptor domains with respective electrodes. However, the formation of P3HT/PCBM depletion zones next to the P3HT/PCBM-rich zones can be a potential bottleneck for electron/hole transport due to increase in transport pathway length. Analysis in terms of fraction of intra- and interchain charge transports revealed that processing schedule affects the average vertical orientation of polymer chains, which may be crucial for enhanced charge transport, nongeminate recombination, and charge collection. The present study establishes a more detailed link between processing and morphology by combining multiscale molecular
Transitional annealed adaptive slice sampling for Gaussian process hyper-parameter estimation
Garbuno-Inigo, A.; DiazDelaO, F. A.; Zuev, K. M.
2015-01-01
Surrogate models have become ubiquitous in science and engineering for their capability of emulating expensive computer codes, necessary to model and investigate complex phenomena. Bayesian emulators based on Gaussian processes adequately quantify the uncertainty that results from the cost of the original simulator, and thus the inability to evaluate it on the whole input space. However, it is common in the literature that only a partial Bayesian analysis is carried out, whereby the underlyin...
International Nuclear Information System (INIS)
Muroga, Takeo
1990-01-01
The free defect survival ratio is calculated by "cascade-annealing" computer simulation using the MARLOWE and modified DAIQUIRI codes in various cases of Primary Knock-on Atom (PKA) spectra. The number of subcascades is calculated by "cut-off" calculation using MARLOWE. The adequacy of these methods is checked by comparing the results with experiments (surface segregation measurements and Transmission Electron Microscope cascade defect observations). The correlation using the weighted average recoil energy as a parameter shows that the saturation of the free defect survival ratio at high PKA energies has a close relation to the cascade splitting into subcascades. (author)
Directory of Open Access Journals (Sweden)
Mahdi Sadeghzadeh
2014-02-01
Full Text Available The genetic algorithm is a population-based algorithm with which many optimization problems have been solved successfully. With the increase in attacks on computers, the demand for a secure, efficient and reliable Internet has grown. Cryptology, the science of hidden communication, comprises two branches: cryptography (encryption) and cryptanalysis. In this paper, several cryptanalysis approaches based on genetic algorithms, tabu search and simulated annealing are investigated for a permutation cipher. The study also attempts to compare the performance of the algorithms in terms of the amount of search required, and the results are compared.
International Nuclear Information System (INIS)
Kisdarjono, Hidayat; Voutsas, Apostolos T.; Solanki, Raj
2003-01-01
A model has been developed for the rapid melting and resolidification of thin Si films induced by excimer-laser annealing. The key feature of this model is its ability to simulate lateral growth and random nucleation. The first component of the model is a set of rules for phase change. The second component is a set of functions for computing the latent heat and the displacement of the solid-liquid interface resulting from the phase change. The third component is an algorithm that allows for random nucleation based on classical nucleation theory. Consequently, the model enables the prediction of lateral growth length (LGL), as well as the calculation of other critical responses of the quenched film such as solid-liquid interface velocity and undercooling. Thin amorphous Si films with thickness of 30, 50, and 100 nm were annealed under various laser fluences to completely melt the films. The resulting LGL were measured using a scanning electron microscope. Using physical parameters that were consistent with previous studies, the simulated LGL values agree well with the experimental results over a wide range of irradiation conditions. Sensitivity analysis was done to demonstrate the behavior of the model with respect to a select number of model parameters. Our simulations suggest that, for a given fluence, controlling the film's quenching rate is essential for increasing LGL. To this end, the model is an invaluable tool for evaluating and choosing irradiation strategies for increasing lateral growth in laser-crystallized silicon films.
Directory of Open Access Journals (Sweden)
Momeni Dehaghi, I.
2018-01-01
Full Text Available Habitat degradation and hunting are among the most important causes of population decline for Alectoris chukar and Phasianus colchicus, two of the most threatened game species in the Golestan Province of Iran. Limited data on the distribution and location of high-quality habitats for the two species make conservation efforts more difficult in the province. We used multi-criteria evaluation (MCE) as a coarse-filter approach to refine the general distribution areas into habitat suitability maps for the species. We then used these maps as input to simulated annealing, as a heuristic algorithm, through Marxan in order to prioritize areas for conservation of the two species. To find the optimal solution, we tested various boundary length modifier (BLM) values in the simulated annealing process. Our results showed that the MCE approach was useful for refining general habitat maps. Assessment of the selected reserves confirmed the suitability of the selected areas (mainly neighboring the current reserves), making their management easier and more feasible. The total area of the selected reserves was about 476 km2. As the current reserves of the Golestan Province represent only 23% of the optimal area, further protected areas should be considered to efficiently conserve these two species.
Nemirsky, Kristofer Kevin
In this thesis, the history and evolution of rotor aircraft with simulated annealing-based PID application were reviewed and quadcopter dynamics are presented. The dynamics of a quadcopter were then modeled, analyzed, and linearized. A cascaded loop architecture with PID controllers was used to stabilize the plant dynamics, which was improved upon through the application of simulated annealing (SA). A Simulink model was developed to test the controllers and verify the functionality of the proposed control system design. In addition, the data that the Simulink model provided were compared with flight data to present the validity of derived dynamics as a proper mathematical model representing the true dynamics of the quadcopter system. Then, the SA-based global optimization procedure was applied to obtain optimized PID parameters. It was observed that the tuned gains through the SA algorithm produced a better performing PID controller than the original manually tuned one. Next, we investigated the uncertain dynamics of the quadcopter setup. After adding uncertainty to the gyroscopic effects associated with pitch-and-roll rate dynamics, the controllers were shown to be robust against the added uncertainty. A discussion follows to summarize SA-based algorithm PID controller design and performance outcomes. Lastly, future work on SA application on multi-input-multi-output (MIMO) systems is briefly discussed.
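SA-based PID tuning of the kind described treats the three gains as the state vector and a closed-loop performance index as the energy to minimize. The sketch below is a toy illustration, not the thesis' quadcopter model: the plant is a first-order lag, the cost is a time-weighted absolute error (ITAE-style), and all constants, including the initial "manually tuned" gains, are assumptions.

```python
import math
import random

def step_response_cost(gains, dt=0.01, t_end=5.0):
    """Time-weighted absolute error (ITAE-style) of a PID loop driving a
    first-order plant  dy/dt = (-y + u) / tau  to track a unit step."""
    kp, ki, kd = gains
    tau, y, integ, prev_e, cost = 0.5, 0.0, 0.0, 1.0, 0.0
    for n in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv     # PID control law
        y += dt * (-y + u) / tau                 # explicit Euler plant step
        cost += (n * dt) * abs(e) * dt
    return cost

def anneal_pid(t0=1.0, alpha=0.9, moves=60):
    """Tune (kp, ki, kd) by simulated annealing on the step-response cost."""
    random.seed(4)
    g = [1.0, 0.0, 0.0]                          # hypothetical hand-tuned start
    c = step_response_cost(g)
    best, best_c = list(g), c
    t = t0
    while t > 1e-3:
        for _ in range(moves):
            cand = [max(0.0, gi + random.gauss(0, 0.2)) for gi in g]
            cc = step_response_cost(cand)
            if cc < c or random.random() < math.exp((c - cc) / t):
                g, c = cand, cc
                if c < best_c:
                    best, best_c = list(g), c
        t *= alpha
    return best, best_c

best, best_c = anneal_pid()
```

Unstable candidate gains simply produce a very large cost and are rejected by the Metropolis test, which is how the annealing search stays inside the stabilizing region without an explicit stability constraint.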
Nayak, Nimain Charan; Rajan, C. Christober Asir
2010-10-01
This paper proposes a new hybrid algorithm for solving the unit commitment problem in hydrothermal power systems using a hybrid Evolutionary Programming-Simulated Annealing method. The main objective is to find the generation schedule by committing the generating units such that the total operating cost is minimized while satisfying both the forecasted load demand and the various operating constraints of the generating units. It is a global optimization technique for solving the unit commitment problem that operates on a system designed to encode each unit's operating schedule with regard to its minimum up/down time. In this method, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Here the parents are obtained from a predefined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, random recommitment is carried out with respect to the units' minimum down times. Simulated Annealing (SA) is a powerful optimization procedure that has been successfully applied to a number of combinatorial optimization problems. It avoids entrapment in local optima by maintaining a short-term memory of recently obtained solutions. Numerical results compare the cost solutions and computation time obtained using the proposed hybrid method with conventional methods like Dynamic Programming and Lagrangian Relaxation.
International Nuclear Information System (INIS)
Gomes, Mario Helder; Saraiva, Joao Tome
2009-01-01
This paper describes an optimization model to be used by System Operators in order to validate the economic schedules obtained by Market Operators together with the injections from Bilateral Contracts. These studies will be performed off-line in the day before operation and the developed model is based on adjustment bids submitted by generators and loads and it is used by System Operators if that is necessary to enforce technical or security constraints. This model corresponds to an enhancement of an approach described in a previous paper and it now includes discrete components as transformer taps and reactor and capacitor banks. The resulting mixed integer formulation is solved using Simulated Annealing, a well known metaheuristic specially suited for combinatorial problems. Once the Simulated Annealing converges and the values of the discrete variables are fixed, the resulting non-linear continuous problem is solved using Sequential Linear Programming to get the final solution. The developed model corresponds to an AC version, it includes constraints related with the capability diagram of synchronous generators and variables allowing the computation of the active power required to balance active losses. Finally, the paper includes a Case Study based on the IEEE 118 bus system to illustrate the results that it is possible to obtain and their interest. (author)
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-01-01
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitude of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are adjusted to improve agreement with measured observables while keeping the core simulator models themselves unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also serious concerns. The challenges include the computational burden of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. The computational burden of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data also presents a demanding challenge. The concerns, however, are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem, and we illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
Energy Technology Data Exchange (ETDEWEB)
Nakos, J.T.; Rosinski, S.T.; Acton, R.U.
1994-11-01
The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at heat-up/cooldown rates of 7, 14, and 28 °C/hr [12.5, 25, and 50 °F/hr] as measured on the heated face. A peak temperature of 454 °C [850 °F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be one-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.
Simulation of Defect Reduction in Block Copolymer Thin Films by Solvent Annealing
Energy Technology Data Exchange (ETDEWEB)
Hur, Su-Mi; Khaira, Gurdaman S.; Ramírez-Hernández, Abelardo; Müller, Marcus; Nealey, Paul F.; de Pablo, Juan J.
2015-01-20
Solvent annealing provides an effective means to control the self-assembly of block copolymer (BCP) thin films. Multiple effects, including swelling, shrinkage, and morphological transitions, act in concert to yield ordered or disordered structures. The current understanding of these processes is limited. By relying on a theoretically informed coarse-grained model of block copolymers, a conceptual framework is presented that permits prediction and rationalization of experimentally observed behaviors. Through proper selection of several process conditions, it is shown that a narrow window of solvent pressures exists over which one can direct a BCP material to form well-ordered, defect-free structures.
International Nuclear Information System (INIS)
Miellou, J.C.; Igli, H.; Grivet, M.; Rebetez, M.; Chambaudet, A.
1994-01-01
In minerals, the uranium fission tracks are sensitive to temperature and time. The consequence is that the etchable lengths are reduced. To simulate the phenomenon, at the last International Conference on Nuclear Tracks in solids at Beijing in 1992, we proposed a convection model for fission track annealing based on a reaction situation associated with only one activation energy. Moreover a simple inverse method based on the resolution of an ordinary differential equation was described, making it possible to retrace the thermal history in this mono-exponential situation. The aim of this paper is to consider a more involved class of models including multi-exponentials associated with several activation energies. We shall describe in this framework the modelling of the direct phenomenon and the resolution of the inverse problem. Results of numerical simulations and comparison with the mono-exponential case will be presented. 5 refs. (author)
Directory of Open Access Journals (Sweden)
Doddy Kastanya
2017-02-01
In any reactor physics analysis, the instantaneous power distribution in the core can be calculated when the actual bundle-wise burnup distribution is known. Considering the fact that CANDU (Canada Deuterium Uranium) reactors utilize on-power refueling to compensate for the reduction of reactivity due to fuel burnup, in CANDU fuel management analysis snapshots of power and burnup distributions can be obtained by simulating and tracking the reactor operation over an extended period using various tools such as the *SIMULATE module of the Reactor Fueling Simulation Program (RFSP) code. However, for some studies, such as an evaluation of a conceptual design of a next-generation CANDU reactor, the preferred approach to obtain a snapshot of the power distribution in the core is based on the patterned-channel-age model implemented in the *INSTANTAN module of the RFSP code. The objective of this approach is to obtain a representative snapshot of core conditions quickly. At present, such patterns can be generated by using a program called RANDIS, which is implemented within the *INSTANTAN module. In this work, we present an alternative approach to the patterned-channel-age model in which a simulated-annealing-based algorithm is used to find patterns that produce reasonable power distributions.
International Nuclear Information System (INIS)
Engrand, P.
1998-01-01
As far as stochastic optimization methods are concerned, Simulated Annealing (SA) and Genetic Algorithms (GA) have been successfully applied to fuel management when using a single objective function. Recent work has shown that it is possible to use a true multi-objective approach (e.g., fresh fuel enrichment minimization and cycle length maximization) based on GA. In that approach, ranking the individuals of the population is based on the non-dominance principle. It is shown that a similar approach can be applied to SA, which is traditionally single-objective. In this approach, every time a solution is accepted, it is compared to the archived solutions using the non-dominance principle. At the end of the optimization search, one ends up with an archived population which represents the trade-off surface between all the objective functions of interest, among which the expert will then choose the best solution according to his priorities. (author)
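The non-dominance archiving step described in this record can be sketched as a small helper. Objective vectors are minimized, and the sample points are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Archive update applied each time a solution is accepted: discard the
    candidate if it is dominated, otherwise insert it and evict any archived
    points the candidate dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated: archive unchanged
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
    archive = update_archive(archive, point)
# archive now holds only the mutually non-dominated points, i.e. the
# current estimate of the trade-off surface between the two objectives
```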
Directory of Open Access Journals (Sweden)
Shangchia Liu
2015-01-01
In the field of distributed decision making, different agents share a common processing resource, and each agent wants to minimize a cost function depending on its own jobs only. These issues arise in different application contexts, including real-time systems, integrated service networks, industrial districts, and telecommunication systems. Motivated by its importance in practical applications, we consider two-agent scheduling on a single machine where the objective is to minimize the total completion time of the jobs of the first agent with the restriction that an upper bound is imposed on the total completion time of the jobs of the second agent. For solving the proposed problem, a branch-and-bound algorithm and three simulated annealing algorithms are developed to find the optimal solution. In addition, extensive computational experiments are conducted to test the performance of the algorithms.
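The constrained two-agent objective can be made concrete on a tiny invented instance; brute force over all job sequences stands in here for the paper's branch-and-bound and simulated annealing algorithms:

```python
from itertools import permutations

# Hypothetical instance: (processing_time, agent) pairs on one machine.
jobs = [(2, 'A'), (1, 'A'), (3, 'B'), (2, 'B'), (1, 'A')]
UB = 15  # assumed upper bound on agent B's total completion time

def totals(seq):
    """Total completion times (sum of job finish times) per agent."""
    t = ca = cb = 0
    for p, agent in seq:
        t += p
        if agent == 'A':
            ca += t
        else:
            cb += t
    return ca, cb

# Minimize agent A's total completion time subject to agent B's bound.
best_ca, best_seq = min((totals(s)[0], s) for s in permutations(jobs)
                        if totals(s)[1] <= UB)
```

With this bound, scheduling all of A's jobs first in shortest-processing-time order is still feasible; tightening UB forces B's jobs earlier and raises A's cost, which is exactly the trade-off the two-agent model studies.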
DEFF Research Database (Denmark)
Sousa, Tiago; Vale, Zita; Carvalho, Joao Paulo
2014-01-01
The massification of electric vehicles (EVs) can have a significant impact on the power system, requiring a new approach for the energy resource management. The energy resource management has the objective to obtain the optimal scheduling of the available resources considering distributed...... to determine the best solution in a reasonable amount of time. This paper presents a hybrid artificial intelligence technique to solve a complex energy resource management problem with a large number of resources, including EVs, connected to the electric network. The hybrid approach combines simulated...... annealing (SA) and ant colony optimization (ACO) techniques. The case study concerns different EVs penetration levels. Comparisons with a previous SA approach and a deterministic technique are also presented. For 2000 EVs scenario, the proposed hybrid approach found a solution better than the previous SA...
PASSATA - Object oriented numerical simulation software for adaptive optics
Agapito, G.; Puglisi, A.; Esposito, S.
2016-01-01
We present the last version of the PyrAmid Simulator Software for Adaptive opTics Arcetri (PASSATA), an IDL and CUDA based object oriented software developed in the Adaptive Optics group of the Arcetri observatory for Monte-Carlo end-to-end adaptive optics simulations. The original aim of this software was to evaluate the performance of a single conjugate adaptive optics system for ground based telescope with a pyramid wavefront sensor. After some years of development, the current version of ...
Tournus, Florent; Tamion, Alexandre; Hillion, Arnaud; Dupuis, Véronique
2016-12-01
Isothermal remanent magnetization (IRM) combined with direct current demagnetization (DcD) measurements are powerful tools to qualitatively study the interactions (through the Δm parameter) between magnetic particles in granular media. For magnetic nanoparticles diluted in a matrix, it is possible to reach a regime where Δm is equal to zero, i.e., where interparticle interactions are negligible: one can then infer the intrinsic properties of the nanoparticles through measurements on an assembly, which are analyzed by a combined fit procedure (based on the Stoner-Wohlfarth and Néel models). Here we illustrate the benefits of a quantitative analysis of IRM curves, for Co nanoparticles embedded in amorphous carbon (before and after annealing): while a large anisotropy increase might have been deduced from the other measurements, IRM curves provide an improved characterization of the nanomagnets' intrinsic properties, revealing that this is in fact not the case. This shows that IRM curves, which only probe the irreversible switching of the nanomagnets, are complementary to the widely used low-field susceptibility curves.
Li, Yang; Li, JiaHao; Liu, BaiXin
2015-10-28
Nucleation is one of the most essential transformation paths in phase transition and exerts a significant influence on the crystallization process. Molecular dynamics simulations were performed to investigate the atomic-scale nucleation mechanisms of NiTi metallic glasses upon devitrification at various temperatures (700 K, 750 K, 800 K, and 850 K). Our simulations reveal that at 700 K and 750 K, nucleation is polynuclear with high nucleation density, while at 800 K it is mononuclear. The underlying nucleation mechanisms have been clarified, manifesting that nucleation can be induced either by the initial ordered clusters (IOCs) or by the other precursors of nuclei evolved directly from the supercooled liquid. IOCs and other precursors stem from the thermal fluctuations of bond orientational order in supercooled liquids during the quenching process and during the annealing process, respectively. The simulation results not only elucidate the underlying nucleation mechanisms varied with temperature, but also unveil the origin of nucleation. These discoveries offer new insights into the devitrification mechanism of metallic glasses.
Using the adaptive blockset for simulation and rapid prototyping
DEFF Research Database (Denmark)
Ravn, Ole
1999-01-01
The paper presents the design considerations and implementational aspects of the Adaptive Blockset for Simulink which has been developed in a prototype implementation. The basics of indirect adaptive controllers are summarized. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
International Nuclear Information System (INIS)
Tran Ngoc Ha; Pham Thi Hong Ha
2003-01-01
In the present work, a neural network has been used for mathematically modeling equilibrium data of a mixture of two rare earth elements, namely Nd and Pr, with the PC88A agent. A thermo-genetic algorithm, based on ideas from the genetic algorithm and the simulated annealing algorithm, has been used in the training procedure of the neural networks, giving better results in comparison with the traditional modeling approach. The obtained neural network modeling the experimental data is further used in a computer program to simulate the solvent extraction process of the two elements Nd and Pr. Based on this computer program, various optional schemes for the separation of Nd and Pr have been investigated and proposed. (author)
The Durham Adaptive Optics Simulation Platform (DASP): Current status
Basden, A. G.; Bharmal, N. A.; Jenkins, D.; Morris, T. J.; Osborn, J.; Peng, J.; Staykov, L.
2018-01-01
The Durham Adaptive Optics Simulation Platform (DASP) is a Monte-Carlo modelling tool used for the simulation of astronomical and solar adaptive optics systems. In recent years, this tool has been used to predict the expected performance of the forthcoming extremely large telescope adaptive optics systems, and has seen the addition of several modules with new features, including Fresnel optics propagation and extended object wavefront sensing. Here, we provide an overview of the features of DASP and the situations in which it can be used. Additionally, the user tools for configuration and control are described.
DEFF Research Database (Denmark)
Ravn, Ole
1998-01-01
The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation...... design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
ADAPTIVE QUASICONTINUUM SIMULATION OF ELASTIC-BRITTLE DISORDERED LATTICES
Directory of Open Access Journals (Sweden)
Karel Mikeš
2017-11-01
The quasicontinuum (QC) method is a computational technique that can efficiently handle atomistic lattices by combining continuum and atomistic approaches. In this work, the QC method is combined with an adaptive algorithm to obtain correct predictions of crack trajectories in failure simulations. Numerical simulations of crack propagation in elastic-brittle disordered lattices are performed for a two-dimensional example. The obtained results are compared with the fully resolved particle model. It is shown that the adaptive QC simulation provides a significant reduction of the computational demand. At the same time, the macroscopic crack trajectories and the shape of the force-displacement diagram are very well captured.
An adaptive simulation tool for evacuation scenarios
Formolo, Daniel; van der Wal, C. Natalie
2017-01-01
Building useful and efficient models and tools for a varied audience, such as evacuation simulators for scientists, engineers and crisis managers, can be tricky. Even good models can fail to provide information when the user's tools for the model lack resources. The aim of this work is to
View-Dependent Adaptive Cloth Simulation with Buckling Compensation.
Koh, Woojong; Narain, Rahul; O'Brien, James F
2015-10-01
This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening, while locking in extremely coarsened regions is inhibited by modifying the material model to compensate for unresolved sub-element buckling. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows. The approach produces a 2× speed-up for a single character and more than 4× for a small group as compared to view-independent adaptive simulations, and respectively 5× and 9× speed-ups as compared to non-adaptive simulations.
Simulation of Adaptive Kinetic Architectural Structures
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
2010-01-01
This project deals with shape control of kinetic structures within the field of adaptable architecture. Here a variable geometry truss cantilever structure is analyzed using MATLAB/SIMULINK and the multibody dynamic software MSC Adams. Active shape control of a structure requires that the kinematic...... to obtain the control forces that must be known in order to control the shape of a real variable geometry truss structure. An experimental test of the shape control approach has been implemented using a VGT truss structure and a low-cost data acquisition system based on the open-source Arduino......
Directory of Open Access Journals (Sweden)
Felipe Baesler
2008-12-01
This paper introduces a variant of the metaheuristic simulated annealing, oriented to solving multiobjective optimization problems. The technique is called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). It incorporates short- and long-term memory concepts into Simulated Annealing in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three different techniques on a real-life parallel machine scheduling problem, composed of 24 jobs and two identical machines. This problem represents a real-life case study of the local sawmill industry. The results showed that MOSARTS behaved much better than the other methods utilized, finding better solutions in terms of dominance and frontier dispersion.
GPU accelerated population annealing algorithm
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
Program Files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
Programming language: C, CUDA
External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding and adaptive temperature
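The core loop of population annealing (Metropolis updates plus resampling-based population control) can be sketched in pure Python on a 1D Ising ring. The system size, population size, temperature schedule and sweep count below are toy choices; the reference program itself is C/CUDA for the 2D model:

```python
import math
import random

def energy(s):
    """1D Ising ring energy with J = 1: E = -sum_i s_i * s_{i+1}."""
    n = len(s)
    return -sum(s[i] * s[(i + 1) % n] for i in range(n))

def population_annealing(n=8, pop=200,
                         betas=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
                         sweeps=5, seed=3):
    rng = random.Random(seed)
    # Initial population at infinite temperature (beta = 0): random spins.
    conf = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(pop)]
    for b_prev, b in zip(betas, betas[1:]):
        # Population control: resample replicas with weight exp(-dBeta * E).
        w = [math.exp(-(b - b_prev) * energy(s)) for s in conf]
        conf = [list(s) for s in rng.choices(conf, weights=w, k=pop)]
        # Equilibrate every replica with Metropolis sweeps at the new beta.
        for s in conf:
            for _ in range(sweeps * n):
                i = rng.randrange(n)
                d_e = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])
                if d_e <= 0 or rng.random() < math.exp(-b * d_e):
                    s[i] = -s[i]
    return conf

population = population_annealing()
mean_energy = sum(energy(s) for s in population) / len(population)
```

Because each replica is updated independently between resampling steps, the replica loop is what maps naturally onto GPU threads in the full implementation.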
Adaptive LES Methodology for Turbulent Flow Simulations
Energy Technology Data Exchange (ETDEWEB)
Oleg V. Vasilyev
2008-06-12
Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of the DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments. These experiments are often very expensive and time-consuming. Hence, digital image analysis techniques are a very fast and low-cost methodology for predicting physical properties, knowing only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy, and the 3D model maintains the porosity spatial correlation, chord size distribution and d 3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results will be presented. (author)
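A pixel-based annealing reconstruction of this general kind can be sketched as follows. The correlation target, lattice size, solid fraction and cooling schedule are all invented, and a single directional two-point function stands in for the full set of descriptors (spatial correlation, chord sizes, distance transform) used in the paper:

```python
import math
import random

def s2(img, max_r=4):
    """Two-point probability S2(r) of phase 1 along x, periodic in columns."""
    n, m = len(img), len(img[0])
    return [sum(img[i][j] and img[i][(j + r) % m]
                for i in range(n) for j in range(m)) / (n * m)
            for r in range(1, max_r + 1)]

def anneal_reconstruct(target, n=16, m=16, phi=0.4, steps=3000, seed=7):
    rng = random.Random(seed)
    ones = int(phi * n * m)
    cells = [1] * ones + [0] * (n * m - ones)
    rng.shuffle(cells)
    img = [cells[i * m:(i + 1) * m] for i in range(n)]
    cost = sum((a - b) ** 2 for a, b in zip(s2(img), target))
    t = 1e-3
    for _ in range(steps):
        # Swap two pixels of opposite phase so the solid fraction is preserved.
        i1, j1 = rng.randrange(n), rng.randrange(m)
        i2, j2 = rng.randrange(n), rng.randrange(m)
        if img[i1][j1] == img[i2][j2]:
            continue
        img[i1][j1], img[i2][j2] = img[i2][j2], img[i1][j1]
        new_cost = sum((a - b) ** 2 for a, b in zip(s2(img), target))
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:  # undo the rejected swap
            img[i1][j1], img[i2][j2] = img[i2][j2], img[i1][j1]
        t *= 0.999
    return img, cost

img, final_cost = anneal_reconstruct(target=[0.30, 0.25, 0.20, 0.17])
solid_fraction = sum(sum(row) for row in img) / (16 * 16)
```

Swapping opposite-phase pixels keeps porosity exact by construction, so the anneal only has to match the correlation descriptors, which is the standard trick in this family of reconstructions.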
Chen, Nan; Lee, J. Jack
2013-01-01
Simon’s two-stage design is commonly used in phase II single-arm clinical trials because of its simplicity and smaller sample size under the null hypothesis compared to the one-stage design. Some studies extend this design to accommodate more interim analyses (i.e., three-stage or four-stage designs). However, most of these studies, together with the original Simon’s two-stage design, are based on the exhaustive search method, which is difficult to extend to high-dimensional, general multi-stage designs. In this study, we propose a simulated annealing (SA)-based design to optimize the early stopping boundaries and minimize the expected sample size for multi-stage or continuous monitoring single-arm trials. We compare the results of the SA method, the decision-theoretic method, the predictive probability method, and the posterior probability method. The SA method can reach the smallest expected sample sizes in all scenarios under the constraints of the same type I and type II errors. The expected sample sizes from the SA method are generally 10–20% smaller than those from the posterior probability method or the predictive probability method, and are slightly smaller than those from the decision-theoretic method in almost all scenarios. The SA method offers an excellent alternative in designing phase II trials with continuous monitoring. PMID:23545075
Numerical Simulation of Solidification Microstructure based on Adaptive Octree Grids
Directory of Open Access Journals (Sweden)
Yin Y.
2016-06-01
Full Text Available This paper focuses on the simulation of binary alloy solidification using the phase field model and adaptive octree grids, taking the Ni-Cu binary alloy as an example for the numerical simulation of isothermal solidification. First, the WBM model, the numerical issues and the adaptive octree grids are explained. Second, numerical simulation results for the three-dimensional morphology of the equiaxed grain and the concentration variations are given, exploiting the efficiency advantage of the adaptive octree grids; the microsegregation of the binary alloy is analysed in detail. Numerical results on the influence of thermophysical parameters on equiaxed grain growth are then presented. Finally, a large-scale, long-duration simulation experiment is carried out. It is found that increases in the initial temperature and initial concentration make the grain grow along certain directions, and that adaptive octree grids can be used effectively in simulations of microstructure.
Mental simulation of routes during navigation involves adaptive temporal compression.
Arnold, Aiden E G F; Iaria, Giuseppe; Ekstrom, Arne D
2016-12-01
Mental simulation is a hallmark feature of human cognition, allowing features from memories to be used flexibly during prospection. While past studies demonstrate the preservation of real-world features such as size and distance during mental simulation, their temporal dynamics remain unknown. Here, we compare mental simulations to navigation of routes in a large-scale spatial environment to test the hypothesis that such simulations are temporally compressed in an adaptive manner. Our results show that simulations occurred at 2.39× the speed of actual route navigation, with compression increasing (to 3.57×) for slower movement speeds. Participant self-reports of the vividness and spatial coherence of simulations also correlated strongly with simulation duration, providing an important link between the subjective experience of simulated events and how spatial representations are combined during prospection. These findings suggest that the simulation of spatial events involves adaptive temporal mechanisms, mediated partly by the fidelity of the memories used to generate the simulation. Copyright © 2016 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Young, J.M.; Scovell, P.D.
1982-01-01
A process for annealing crystal damage in ion implanted semiconductor devices in which the device is rapidly heated to a temperature between 450 and 900 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. (author)
A Bacterial-Based Algorithm to Simulate Complex Adaptive Systems
González Rodríguez, Diego; Hernández Carrión, José Rodolfo
2014-01-01
Paper presented at the 13th International Conference on Simulation of Adaptive Behavior which took place at Castellón, Spain in 2014, July 22-25. Bacteria have demonstrated an amazing capacity to overcome environmental changes by collective adaptation through genetic exchanges. Using a distributed communication system and sharing individual strategies, bacteria propagate mutations as innovations that allow them to survive in different environments. In this paper we present an agent-based...
Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu
2013-07-22
To find low energy Si(n)C(n) structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parrinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This ensures that each SA simulation samples a different region of the potential surface to find its regional minimum structure. By iterating this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to ten of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of the low energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans the carbon segregation region.
Directory of Open Access Journals (Sweden)
Larry W. Burggraf
2013-07-01
Full Text Available To find low energy SinCn structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parrinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This ensures that each SA simulation samples a different region of the potential surface to find its regional minimum structure. By iterating this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to ten of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4–12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of the low energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans the carbon segregation region.
Simulated Annealing Based Algorithm for Identifying Mutated Driver Pathways in Cancer
Directory of Open Access Journals (Sweden)
Hai-Tao Li
2014-01-01
Full Text Available With the development of next-generation DNA sequencing technologies, large-scale cancer genomics projects can be implemented to help researchers identify driver genes, driver mutations, and driver pathways, which promote cancer proliferation, in large numbers of cancer patients. Hence, one of the remaining challenges is to distinguish the functional mutations vital for cancer development from the nonfunctional, random “passenger mutations.” In this study, we introduce a modified method to solve the so-called maximum weight submatrix problem, which is used to identify mutated driver pathways in cancer. The problem is based on two combinatorial properties, coverage and exclusivity. In particular, we enhance an integrative model which combines gene mutation and expression data. Experimental results on simulated data show that our method is more efficient than the other methods compared. Finally, we apply the proposed method to two real biological datasets. The results show that our proposed method is also applicable in practice.
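The coverage/exclusivity trade-off above can be made concrete with the standard coverage-minus-overlap weight. The sketch below is an illustration of the underlying maximum-weight-submatrix idea, not the authors' code: the mutation matrix is synthetic, with a planted exclusive gene set, and a simulated annealing loop searches over size-k gene sets by swapping one gene at a time.

```python
import math
import random

# synthetic mutation data: gene -> set of patients carrying a mutation in it.
# genes A, B, C are a planted driver pathway: together they cover all ten
# patients with no overlap (perfect coverage and exclusivity).
MUT = {
    "A": {0, 1, 2, 3}, "B": {4, 5, 6}, "C": {7, 8, 9},
    "N1": {0, 4, 7}, "N2": {1, 5}, "N3": {2, 8, 9},
}

def weight(genes, mut):
    """Dendrix-style weight: 2*(patients covered) - total mutation count,
    i.e. coverage minus an overlap penalty for violated exclusivity."""
    covered = set()
    total = 0
    for g in genes:
        covered |= mut[g]
        total += len(mut[g])
    return 2 * len(covered) - total

def anneal_geneset(mut, k, iters=5000, t0=2.0, seed=0):
    """Anneal over size-k gene sets; a move swaps one gene in, one out."""
    rng = random.Random(seed)
    genes = list(mut)
    cur = set(rng.sample(genes, k))
    cur_w = weight(cur, mut)
    best, best_w = set(cur), cur_w
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-3
        g_out = rng.choice(sorted(cur))
        g_in = rng.choice([g for g in genes if g not in cur])
        cand = (cur - {g_out}) | {g_in}
        w = weight(cand, mut)
        if w >= cur_w or rng.random() < math.exp((w - cur_w) / t):
            cur, cur_w = cand, w
            if w > best_w:
                best, best_w = set(cand), w
    return best, best_w

best, best_w = anneal_geneset(MUT, 3)
```

On real data the move set and weight stay the same; only the mutation matrix (and optionally an expression-weighted variant of `weight`) changes.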
SimulCAT: Windows Software for Simulating Computerized Adaptive Test Administration
Han, Kyung T.
2012-01-01
Most, if not all, computerized adaptive testing (CAT) programs use simulation techniques to develop and evaluate CAT program administration and operations, but such simulation tools are rarely available to the public. Up to now, several software tools have been available to conduct CAT simulations for research purposes; however, these existing…
SIMULATION OF PULSED BREAKDOWN IN HELIUM BY ADAPTIVE METHODS
Directory of Open Access Journals (Sweden)
S. I. Eliseev
2014-09-01
Full Text Available The paper deals with the processes occurring during electrical breakdown in gases, as well as the numerical simulation of these processes using adaptive mesh refinement methods. A discharge between needle electrodes in helium at atmospheric pressure is selected for the test simulation. The physical model of the accompanying breakdown processes is based on a self-consistent system of continuity equations for the streams of charged particles (electrons and positive ions) and the Poisson equation for the electric potential. The sharp plasma heterogeneity in the streamer region requires adaptive algorithms for constructing the computational grids. A method for adaptive grid construction is described, together with a justification of its effectiveness for simulating strongly unsteady gas breakdown at atmospheric pressure. An upgraded version of the Gerris package is used for the numerical simulation of the electrical gas breakdown. This software package, originally aimed at nonlinear problems in fluid dynamics, proves suitable for modeling processes in non-stationary plasma described by continuity equations. The use of adaptive grids makes it possible to obtain an adequate numerical model of breakdown development in the needle-electrode system. The breakdown dynamics is illustrated by contour plots of the electron density and electric field intensity obtained in the course of the solution. The formation mechanism of positive and negative (anode-directed) streamers is demonstrated and analyzed. The correspondence between the adaptive construction of the computational grid and the generated plasma gradients is shown. The results obtained can serve as a basis for full-scale numerical experiments on electric breakdown in gases.
Kaliszewski, M.; Mazuro, P.
2016-09-01
The simulated annealing method of optimisation is tested for the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on the cylinder simply by being bent to fit it. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the simulated annealing method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented, and an example of an unsuccessful optimisation performed in APDL is discussed. A possible direction for further improvement of the optimisation is proposed.
A viscosity adaption method for Lattice Boltzmann simulations
Conrad, Daniel; Schneider, Andreas; Böhle, Martin
2014-11-01
In this work, we consider the limited practical applicability of the Lattice Boltzmann Method (LBM) for non-Newtonian fluid flows. Several authors have shown that the LBM is capable of correctly simulating such fluids. However, for stability reasons the modeled viscosity range has to be truncated. The resulting viscosity boundaries are chosen arbitrarily, because the correct simulation Mach number for the physical problem is unknown a priori, which easily leads to corrupt simulation results. A viscosity adaption method (VAM) is derived which drastically improves the applicability of the LBM to non-Newtonian fluid flows by adapting the modeled viscosity range to the actual physical problem. This is done by tuning the global Mach number to the solution-dependent shear rate. We demonstrate that the VAM can be used to accelerate LBM simulations and improve their accuracy, for both steady state and transient cases.
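The bookkeeping behind such an adaption can be illustrated with the standard BGK relation ν = c_s²(τ − 1/2) in lattice units. The sketch below is our illustration, not the authors' code: the stable τ window and the geometric-mean compromise for the time step are assumptions, but they show how a physical viscosity range is mapped into a stable relaxation-time range by choosing the time step (equivalently, the Mach number).

```python
def tau_from_nu_lb(nu_lb):
    """BGK relation in lattice units: nu = c_s^2 * (tau - 1/2), with c_s^2 = 1/3."""
    return 3.0 * nu_lb + 0.5

def adapt_time_step(nu_phys_min, nu_phys_max, dx, tau_min=0.55, tau_max=2.0):
    """Choose dt so the physical viscosity range maps into a stable tau window.
    Lattice viscosity scales as nu_lb = nu_phys * dt / dx**2, so each end of
    the window bounds dt from one side."""
    nu_lb_min = (tau_min - 0.5) / 3.0
    nu_lb_max = (tau_max - 0.5) / 3.0
    dt_low = nu_lb_min * dx ** 2 / nu_phys_min    # thinnest fluid stays stable
    dt_high = nu_lb_max * dx ** 2 / nu_phys_max   # thickest fluid stays resolvable
    if dt_low > dt_high:
        raise ValueError("viscosity range too wide for one tau window")
    return (dt_low * dt_high) ** 0.5              # geometric-mean compromise

# example: a shear-thinning fluid spanning one decade of kinematic viscosity
dt = adapt_time_step(1e-6, 1e-5, 1e-3)
```

When the shear-rate-dependent viscosity range drifts during the run, recomputing `dt` (and rescaling velocities accordingly) is the essence of adapting the modeled range to the problem.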
The behavior of adaptive bone-remodeling simulation models
H.H. Weinans (Harrie); R. Huiskes (Rik); H.J. Grootenboer
1992-01-01
The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
DEFF Research Database (Denmark)
Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.
2016-01-01
the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...
International Nuclear Information System (INIS)
Young, J.M.; Scovell, P.D.
1981-01-01
A process for annealing crystal damage in ion implanted semiconductor devices is described in which the device is rapidly heated to a temperature between 450 and 600 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. The process may be enhanced by the application of optical radiation from a xenon lamp. (author)
Simulation for noise cancellation using LMS adaptive filter
Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung
2017-06-01
In this paper, the fundamental noise-cancellation algorithm, the Least Mean Square (LMS) algorithm, is studied and implemented in an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. A noise-corrupted speech signal and an engine noise signal are used as inputs to the LMS adaptive filter. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The results show that the noise signal is successfully cancelled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome shows that the filtered signal approaches the noise-free speech signal as adaptation proceeds. The frequency range of noise successfully cancelled by the LMS adaptive filter is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter shows significant noise cancellation in the lower frequency range.
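The structure of such a canceller is compact enough to sketch. The following toy version is our illustration, not the paper's code: a sinusoid stands in for the speech, and white noise reaching the primary microphone through an assumed 2-tap channel stands in for the engine noise; the filter order and step size are arbitrary choices.

```python
import math
import random

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise canceller: adapt an FIR filter so the filtered
    reference matches the noise component of the primary input; the error
    output is the estimate of the clean speech."""
    w = [0.0] * order
    buf = [0.0] * order
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                          # shift reference into taps
        y = sum(wi * xi for wi, xi in zip(w, buf))    # noise estimate
        e = d - y                                     # error = speech estimate
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# synthetic demo: sinusoidal "speech" plus channel-filtered white "engine" noise
rng = random.Random(0)
N = 4000
noise = [rng.uniform(-1.0, 1.0) for _ in range(N)]
speech = [0.5 * math.sin(2 * math.pi * 0.01 * t) for t in range(N)]
primary = [speech[t] + 0.7 * noise[t] - (0.3 * noise[t - 1] if t > 0 else 0.0)
           for t in range(N)]
cleaned = lms_cancel(primary, noise)
```

Because the speech is uncorrelated with the reference noise, the weights converge toward the channel taps and the error output converges toward the speech.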
Resolution Convergence in Cosmological Hydrodynamical Simulations Using Adaptive Mesh Refinement
Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim
2018-03-01
We have explored the evolution of gas distributions in cosmological simulations carried out with the RAMSES adaptive mesh refinement (AMR) code, in order to examine the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions and the final resolution of the simulation. Lower initial-resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the initial conditions is therefore an important factor, even at fixed maximum spatial resolution. The power spectrum of gas in AMR simulations diverges strongly from the fixed-grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and initial-condition resolution, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20%, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
Selection of autochthonous bifidobacterial isolates adapted to simulated gastrointestinal fluid
Directory of Open Access Journals (Sweden)
H Jamalifar
2010-03-01
Full Text Available Background and purpose of the study: Bifidobacterial strains are highly sensitive to acidic conditions, which can affect their viability in the stomach and in fermented foods and, as a result, restrict their use as live probiotic cultures. The aim of the present study was to obtain bifidobacterial isolates with augmented tolerance to simulated gastrointestinal conditions using a cross-protection method. Methods: Individual bifidobacterial strains were treated in an acidic environment and also in media containing bile salts and NaCl. The viability of the acid- and acid-bile-NaCl-tolerant isolates was further examined in simulated gastric and small-intestinal fluids by incubating the probiotic bacteria in the corresponding media for 120 min. The antipathogenic activities of the adapted isolates were compared with those of the original strains. Results and major conclusion: The acid- and acid-bile-NaCl-adapted isolates showed significantly improved viability (p<0.05) in simulated gastric fluid compared to their parent strains. The reduction in bacterial count (log cfu/ml) of the acid- and acid-bile-NaCl-adapted isolates in simulated gastric fluid ranged from 0.64-3.06 and 0.36-2.43 logarithmic units, respectively, after 120 min of incubation. There was no significant difference between the viability of the acid-bile-NaCl-tolerant isolates and the original strains under simulated small-intestinal conditions, except for Bifidobacterium adolescentis (p<0.05). The presence of 15 ml of supernatant from the acid-bile-NaCl-adapted isolates, and also from the initial Bifidobacterium strains, inhibited pathogenic bacterial growth for 24 hrs. Probiotic bacteria with improved ability to survive the harsh gastrointestinal environment could thus be obtained by successive treatment of the strains in acid, bile salt and NaCl environments.
simulate_CAT: A Computer Program for Post-Hoc Simulation for Computerized Adaptive Testing
Directory of Open Access Journals (Sweden)
İlker Kalender
2015-06-01
Full Text Available This paper presents computer software developed by the author. The software conducts post-hoc simulations for computerized adaptive testing based on real responses of examinees to paper-and-pencil tests, under different parameters that can be defined by the user. Brief background on post-hoc simulations is given, the working principle of the software is described, and a sample simulation with the required input files is shown. Finally, the output files are described.
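The core loop of a post-hoc CAT simulation can be sketched as follows. This is a generic illustration under a 2PL model with maximum-information item selection and EAP scoring, not the actual software; the item pool and recorded responses are synthetic.

```python
import math

def p2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def eap(items, resp, grid):
    """Expected a posteriori ability estimate under a standard normal prior."""
    post = []
    for th in grid:
        lik = math.exp(-0.5 * th * th)
        for (a, b), u in zip(items, resp):
            p = p2pl(th, a, b)
            lik *= p if u else 1.0 - p
        post.append(lik)
    z = sum(post)
    return sum(th * w for th, w in zip(grid, post)) / z

def post_hoc_cat(pool, recorded, length, grid):
    """Replay recorded paper-and-pencil responses as if administered by a CAT:
    repeatedly pick the most informative unused item at the current theta,
    score the examinee's recorded answer, and re-estimate theta."""
    theta, used, admin, resp = 0.0, set(), [], []
    for _ in range(length):
        j = max((j for j in range(len(pool)) if j not in used),
                key=lambda j: info(theta, *pool[j]))
        used.add(j)
        admin.append(pool[j])
        resp.append(recorded[j])
        theta = eap(admin, resp, grid)
    return theta, used

grid = [i / 10.0 - 4.0 for i in range(81)]     # theta grid on [-4, 4]
pool = [(1.0, b / 2.0) for b in range(-6, 7)]  # 13 items, difficulties in [-3, 3]
theta_hi, used_hi = post_hoc_cat(pool, [1] * 13, 6, grid)
theta_lo, used_lo = post_hoc_cat(pool, [0] * 13, 6, grid)
```

Varying the test length, selection rule, or stopping criterion over the same recorded responses is exactly the kind of parameter study such software automates.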
A Cooperative Human-Adaptive Traffic Simulation (CHATS)
Phillips, Charles T.; Ballin, Mark G.
1999-01-01
NASA is considering the development of a Cooperative Human-Adaptive Traffic Simulation (CHATS) to examine and evaluate the performance of the National Airspace System (NAS) as the aviation community moves toward free flight. CHATS will be specifically oriented toward simulating strategic decision-making by airspace users and by the service provider's traffic management personnel, within the context of different airspace and rules assumptions. It will use human teams to represent these interests and make decisions, and will rely on computer modeling and simulation to calculate the impacts of these decisions. The simulation objectives will be to examine: 1. the evolution of airspace users' and the service provider's strategies, through adaptation to new operational environments; 2. air carriers' competitive and cooperative behavior; 3. the expected benefits to airspace users and the service provider as compared to the current NAS; 4. the operational limitations of free flight concepts due to congestion and safety concerns. This paper describes an operational concept for CHATS and presents a high-level functional design which would utilize a combination of existing and new models and simulation capabilities.
WCDMA Mobile Radio Network Simulator with Hybrid Link Adaptation
Directory of Open Access Journals (Sweden)
Vladimir Wieser
2005-01-01
Full Text Available The main aim of this article is to describe a mobile radio network model, which is used to simulate realistic conditions in a mobile radio network and supports several link adaptation algorithms. The algorithms were designed to increase the efficiency of data transmission between the user equipment and the base station (uplink). The most important property of the model is its ability to simulate several radio cells (base stations) and their mutual interactions. The model is built on the basic principles of the UMTS network and takes into account the parameters of real mobile radio networks.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
Sharma, Gulshan B.; Robertson, Douglas D.
2013-07-01
Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design will withstand a lifetime of use. Finite element (FE) analyses have been used extensively to study the stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was simulated numerically, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than actual
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
Energy Technology Data Exchange (ETDEWEB)
Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States); University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4 (Canada); Robertson, Douglas D., E-mail: douglas.d.robertson@emory.edu [Emory University, Department of Radiology and Imaging Sciences, Spine and Orthopaedic Center, Atlanta, Georgia 30329 (United States); University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213 (United States)
2013-07-01
Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design will withstand a lifetime of use. Finite element (FE) analyses have been used extensively to study the stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was simulated numerically, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than
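The strain-energy-density remodeling rule described above can be sketched per element. The snippet below is a schematic, not the study's code: the reference stimulus, lazy-zone width, rate constant and density bounds are placeholder values; in the actual workflow the stimulus comes from the FE solution at each iteration.

```python
def remodel(density, sed, k_ref=0.004, width=0.1, rate=1.0,
            rho_min=0.01, rho_max=1.74, dt=1.0):
    """One iteration of a Huiskes-type remodeling rule: the stimulus is
    strain energy density per unit mass (U/rho); density grows when the
    stimulus exceeds a lazy zone around the reference, and resorbs below it."""
    new = []
    for rho, u in zip(density, sed):
        stim = u / rho
        hi = (1.0 + width) * k_ref
        lo = (1.0 - width) * k_ref
        if stim > hi:            # overloaded -> bone apposition
            rho += rate * (stim - hi) * dt
        elif stim < lo:          # underloaded -> resorption
            rho += rate * (stim - lo) * dt
        new.append(min(max(rho, rho_min), rho_max))
    return new

# three elements: overloaded, inside the lazy zone, underloaded
density = remodel([0.8, 0.8, 0.8], [0.008, 0.0032, 0.0001])
```

Iterating this update, with the element stiffness recomputed from the new density each time, is what drives the simulation toward the converged, specimen-like density distribution.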
An adaptive nonlinear solution scheme for reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine- and coarse-grid simulations of several multiphase reservoirs from around the world.
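The damped Newton step mentioned above can be sketched in isolation. This toy version is our illustration, not the simulator's implementation: it halves the Newton step until the residual norm decreases, demonstrated on a small 2x2 nonlinear system with a hand-rolled dense solver standing in for the preconditioned Krylov method.

```python
def solve(a, rhs):
    """Solve a small dense linear system by Gaussian elimination with pivoting."""
    n = len(rhs)
    m = [row[:] + [v] for row, v in zip(a, rhs)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def damped_newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method with step halving: damp until the residual norm drops."""
    x = list(x0)
    for _ in range(max_iter):
        r = f(x)
        norm = max(abs(v) for v in r)
        if norm < tol:
            return x
        dx = solve(jac(x), [-v for v in r])       # Newton step: J dx = -f(x)
        lam = 1.0
        trial = x
        while lam > 1e-4:
            trial = [xi + lam * di for xi, di in zip(x, dx)]
            if max(abs(v) for v in f(trial)) < norm:
                break
            lam *= 0.5                            # damping: halve the step
        x = trial
    return x

# demo on a small nonlinear system: x^2 + y^2 = 4, x*y = 1
f = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
jac = lambda v: [[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]]
root = damped_newton(f, jac, [2.0, 0.3])
```

The damping guards against the overshoot that full Newton steps produce on strongly nonlinear residuals, which is why it is applied on each locally refined grid.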
Directory of Open Access Journals (Sweden)
Flávio Lopes Rodrigues
2004-04-01
Full Text Available The objectives of this work were to develop and test the simulated annealing (SA) metaheuristic for solving forest management problems with integrality constraints. The SA algorithm developed was tested on four problems containing between 93 and 423 decision variables, subject to singularity, minimum-production and maximum-production constraints per period. All problems had the maximization of net present value as their objective. The SA algorithm was coded in Delphi 5.0 and the tests were carried out on an AMD K6-II 500 MHz microcomputer with 64 MB of RAM and a 15 GB hard disk. The performance of SA was evaluated according to measures of effectiveness and efficiency. Different values or categories of the SA parameters were tested and compared with respect to their effects on the effectiveness of the algorithm. The best parameter configuration was selected with the L&O test, at 1% probability, and the analyses were carried out using descriptive statistics. The best parameter configuration gave SA a mean effectiveness of 95.36% of the mathematical optimum obtained by the exact branch-and-bound algorithm, with a minimum of 83.66%, a maximum of 100% and a coefficient of variation of 3.18%. For the largest problem, the efficiency of SA was ten times higher than that of the exact branch-and-bound algorithm. The good performance of this heuristic reinforces conclusions drawn in other studies about its great potential for solving important forest management problems that are hard to solve with current computational tools.
Directory of Open Access Journals (Sweden)
Kumar Deepak
2015-12-01
Full Text Available Groundwater contamination due to leakage of gasoline is one of several causes that pollute the groundwater environment. In the past few years, in-situ bioremediation has attracted researchers because of its ability to remediate the contaminant on site at low cost. This paper proposes the use of a new hybrid algorithm to optimize a multi-objective function whose first objective is the cost of remediation and whose second objective is the residual contaminant at the end of the remediation period. The hybrid algorithm combines the methods of Differential Evolution, Genetic Algorithms and Simulated Annealing. A Support Vector Machine (SVM) was used as a virtual simulator for the biodegradation of contaminants in the groundwater flow. The results obtained from the hybrid algorithm were compared with Differential Evolution (DE), the Non-Dominated Sorting Genetic Algorithm (NSGA-II) and Simulated Annealing (SA). It was found that the proposed hybrid algorithm was capable of providing the best solution. Fuzzy logic was used to find the best compromise solution, and finally a pumping-rate strategy for groundwater remediation is presented for this solution. The results show that the cost incurred for the best compromise solution is intermediate between the highest and lowest costs incurred for the other non-dominated solutions.
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, with performance benchmarks. PYMSES currently has two visualization techniques that work directly on the AMR: the first is a splatting technique, the second a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques, the Python multiprocessing library versus the use of an MPI run. The load-balancing strategy has to be defined carefully in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
New Capabilities for Adaptive Mesh Simulation Use within FORWARD
Mathews, N.; Flyer, N.; Gibson, S. E.; Kucera, T. A.; Manchester, W.
2016-12-01
The multiscale nature of the solar corona can pose challenges to numerical simulations. Adaptive meshes are often used to resolve fine-scale structures, such as the chromospheric-coronal interface found in prominences and the transition region as a whole. FORWARD is a SolarSoft IDL package designed as a community resource for creating a broad range of synthetic coronal observables from numerical models and comparing them to data. However, to date its interface with numerical simulations has been limited to regular grids. We will present a new adaptive-grid interface to FORWARD that will enable efficient synthesis of solar observations. This is accomplished through the use of hierarchical IDL structures designed to enable finding nearest-neighbor points quickly for non-uniform grids. This facilitates line-of-sight integrations that can adapt to the unequally spaced mesh. We will demonstrate this capability for the Alfven-Wave driven SOlar wind Model (AWSOM), part of the Space Weather Modeling Framework (SWMF). In addition, we will use it in the context of a prominence-cavity model, highlighting new capabilities in FORWARD that allow treatment of continuum absorption as well as EUV line emission via dual populations (chromosphere-corona).
Dynamically adaptive data-driven simulation of extreme hydrological flows
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
Dynamically adaptive data-driven simulation of extreme hydrological flows
Kumar Jain, Pushkar
2017-12-27
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
Adaptive hybrid simulations for multiscale stochastic reaction networks
International Nuclear Information System (INIS)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-01
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods comprises hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, for certain classes of networks, the quasi-stationary assumption can be applied to approximate the dynamics of fast subnetworks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reaction and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention, and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time period of interest.
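The SSA mentioned above can be illustrated with a minimal Gillespie loop for a simple birth-death network. This is a textbook sketch, not the authors' hybrid method, and all names are illustrative:

```python
import math
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Minimal Gillespie SSA for a birth-death process:
    X -> X+1 with rate k_birth, and X -> X-1 with rate k_death * X.
    Returns the copy number at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1 = k_birth          # birth propensity
        a2 = k_death * x      # death propensity
        a0 = a1 + a2
        if a0 == 0.0:
            return x
        # Time to the next reaction is Exp(a0); 1 - random() lies in (0, 1].
        t += -math.log(1.0 - rng.random()) / a0
        if t > t_end:
            return x
        # Choose which reaction fires, proportional to its propensity.
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1
```

For this network the stationary distribution is Poisson with mean `k_birth / k_death`; a hybrid method of the kind described in the abstract would instead treat high-copy-number species with continuous dynamics and reserve such discrete jumps for low-copy-number species.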
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of numerical solutions to a wide range of problems. Local refinement of the computational mesh, where it is needed to improve the accuracy of solutions, yields better solution resolution and a more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three overall parts. First we explain the methodology and intricacies of the PAFD scheme. Then we extend a cell-centered finite difference approximation to a multilevel hierarchy of refined grids, and finally we employ the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced computing resource consumption. (Author)
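The subcycling described above can be sketched abstractly: advance the coarse level one step, advance the fine level several smaller steps, then synchronize the levels. A toy two-level illustration follows; the interface and names are hypothetical, not the thesis code, and the conservative synchronization is only indicated by a comment.

```python
def advance_with_subcycling(coarse, fine, dt, refine_ratio, step):
    """Advance a two-level AMR hierarchy one coarse step with subcycling:
    the fine level takes `refine_ratio` steps of size dt / refine_ratio."""
    coarse = step(coarse, dt)
    for _ in range(refine_ratio):
        fine = step(fine, dt / refine_ratio)
    # A real implementation would now synchronize the levels (flux correction)
    # so the global solution remains conservative across the interface.
    return coarse, fine
```

With a trivial `step` that accumulates elapsed time, both levels reach the same advanced time level after one coarse step, which is exactly the point at which synchronization occurs in the scheme described above.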
Direct numerical simulation of bubbles with parallelized adaptive mesh refinement
International Nuclear Information System (INIS)
Talpaert, A.
2015-01-01
The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps in knowing precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model. It is particularly adapted to low-Mach-number flows, that is to say, flows whose velocity is much slower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases thanks to a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a batch of patches in order to locally increase precision in areas that matter more, and capture fine changes in the front location and its topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all these models. We present the results of these numerical simulations, as well as the speed-up compared with equivalent non-AMR simulations and with serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
Directory of Open Access Journals (Sweden)
Walther Rogério Buzzo
2000-12-01
Full Text Available This paper deals with the permutation flow shop scheduling problem. Many heuristic methods have been proposed for this problem; one class of such heuristics improves initial job sequences through search procedures on the solution space, such as Genetic Algorithms (GA) and Simulated Annealing (SA). A promising approach, which has attracted growing attention, is the formulation of hybrid metaheuristics combining GA and SA techniques so that the resulting procedure is more effective than either of its components alone. In this paper we present a hybrid Genetic Algorithm-Simulated Annealing heuristic for the minimal-makespan permutation flow shop sequencing problem. In order to evaluate the effectiveness of the hybridization, we compare the hybrid heuristic with both pure GA and pure SA heuristics. Results from computational experience are presented.
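The makespan objective minimized by such heuristics is computed by a standard permutation flow shop recurrence: each job starts on a machine once both that machine is free and the job has finished on the previous machine. A minimal sketch (illustrative, not the paper's implementation):

```python
def makespan(perm, proc_times):
    """Makespan of a job permutation in a permutation flow shop.
    proc_times[j][m] is the processing time of job j on machine m."""
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines  # completion time of the last scheduled job on each machine
    for j in perm:
        for m in range(n_machines):
            # Machine m can start job j only after it finishes its previous job
            # and after job j finishes on machine m - 1.
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc_times[j][m]
    return finish[-1]
```

For two jobs with processing times [[3, 2], [1, 4]] on two machines, the order (1, 0) gives makespan 7 versus 9 for (0, 1), which is the kind of improvement the GA/SA search procedures described above seek over the space of permutations.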
Hydrodynamics in adaptive resolution particle simulations: Multiparticle collision dynamics
Energy Technology Data Exchange (ETDEWEB)
Alekseeva, Uliana, E-mail: Alekseeva@itc.rwth-aachen.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); German Research School for Simulation Sciences (GRS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Winkler, Roland G., E-mail: r.winkler@fz-juelich.de [Theoretical Soft Matter and Biophysics, Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Sutmann, Godehard, E-mail: g.sutmann@fz-juelich.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); ICAMS, Ruhr-University Bochum, D-44801 Bochum (Germany)
2016-06-01
A new adaptive resolution technique for particle-based multi-level simulations of fluids is presented. In the approach, the representation of fluid and solvent particles is changed on the fly between an atomistic and a coarse-grained description. The present approach is based on a hybrid coupling of the multiparticle collision dynamics (MPC) method and molecular dynamics (MD), thereby coupling stochastic and deterministic particle-based methods. Hydrodynamics is examined by calculating velocity and current correlation functions for various mixed and coupled systems. We demonstrate that hydrodynamic properties of the mixed fluid are conserved by a suitable coupling of the two particle methods, and that the simulation results agree well with theoretical expectations.
Simulated Guide Stars: Adapting the Robo-AO Telescope Simulator to UH 88”
Ashcraft, Jaren; Baranec, Christoph
2018-01-01
Robo-AO is an autonomous adaptive optics system in development for the UH 88” Telescope at Mauna Kea Observatory. This system is capable of achieving near diffraction-limited imaging for astronomical telescopes, and has previously seen successful deployment and use at the Palomar and Kitt Peak Observatories. A key component of this system, the telescope simulator, will be adapted from the Palomar Observatory design to fit the UH 88” Telescope. The telescope simulator will simulate the exit pupil of the UH 88” telescope so that the greater Robo-AO system can be calibrated before observing runs. The system was designed in Code V and then further improved in Zemax for later development. Alternate design forms were explored for the potential of adapting the telescope simulator to the NASA Infrared Telescope Facility, where simulating the exit pupil of the telescope proved to be more problematic. A proposed design composed solely of catalog optics was successfully produced for both telescopes, and the designs await assembly as construction of the new Robo-AO system begins.
Adapting to life: simulating an ecosystem within an unstructured adaptive mesh ocean model
Hill, J.; Piggott, M. D.; Popova, E. E.; Ham, D. A.; Srokosz, M. A.
2010-12-01
Ocean oligotrophic gyres are characterised by low rates of primary production. Nevertheless, their great area, covering roughly a third of the Earth's surface and probably constituting the largest ecosystem on the planet, means that they play a crucial role in global biogeochemistry. Current models give values of primary production two orders of magnitude lower than those observed, thought to be due to the non-resolution of sub-mesoscale phenomena, which play a significant role in nutrient supply in such areas. However, which aspects of sub-mesoscale processes are responsible for the observed higher productivity is an open question. Existing models are limited by two opposing requirements: to have high enough spatial resolution to fully resolve the processes involved (down to order 1 km) and the need to realistically simulate the full gyre. No model can currently satisfy both of these constraints. Here, we detail Fluidity-ICOM, a non-hydrostatic, finite-element, unstructured-mesh ocean model. Adaptive mesh techniques allow us to focus resolution where and when we require it. We present the first steps towards performing a full North Atlantic simulation, by showing that adaptive mesh techniques can be used in conjunction with both turbulence parametrisations and ecosystem models in pseudo-1D water columns. We show that the model can successfully reproduce the annual variation of the mixed layer depth at key locations within the North Atlantic gyre, with adaptive meshing producing more accurate results than the fixed-mesh simulations, with fewer degrees of freedom. Moreover, the model is capable of reproducing the key behaviour of the ecosystem at those locations.
International Nuclear Information System (INIS)
Jung, Woo Sik
1993-02-01
This study presents an efficient methodology that derives design alternatives and performance criteria of safety functions/systems in commercial nuclear power plants. Determination of design alternatives and intermediate-level performance criteria is posed as a reliability allocation problem. The reliability allocation is performed to determine the reliabilities of safety functions/systems from top-level performance criteria. The reliability allocation is a very difficult multi-objective optimization problem (MOP) as well as a global optimization problem with many local minima. The weighted Chebyshev norm (WCN) approach, in combination with an improved Metropolis algorithm of simulated annealing, is developed and applied to the reliability allocation problem. The hierarchy of probabilistic safety criteria (PSC) may consist of three levels, ranging from the overall top level (e.g., core damage frequency, acute fatality and latent cancer fatality) through the intermediate level (e.g., unavailability of a safety system/function) to the low level (e.g., unavailability of components, component specifications or human error). In order to determine design alternatives of safety functions/systems and the intermediate-level PSC, the reliability allocation is performed from the top-level PSC. The intermediate level corresponds to an objective space and the top level is related to a risk space. The reliability allocation is performed by means of a concept of two-tier noninferior solutions in the objective and risk spaces within the top-level PSC. In this study, two kinds of two-tier noninferior solutions are defined: intolerable intermediate-level PSC and desirable design alternatives of safety functions/systems, determined from Sets 1 and 2, respectively. Set 1 is obtained by simultaneously maximizing not only safety function/system unavailabilities but also risks. Set 1 reflects safety function/system unavailabilities in the worst case. Hence, the
Simulation of nonpoint source contamination based on adaptive mesh refinement
Kourakos, G.; Harter, T.
2014-12-01
Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops, and dairies, while the received contaminants emerge, at significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells, and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into three categories: i) index methods, ii) regression methods, and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km²) groundwater basins. First we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, in which the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.
Doostparast Torshizi, Abolfazl; Fazel Zarandi, Mohammad Hossein
2015-09-01
This paper considers microarray gene expression data clustering using a novel two-stage meta-heuristic algorithm based on the concept of α-planes in general type-2 fuzzy sets. The main aim of this research is to present a powerful data clustering approach capable of dealing with highly uncertain environments. In this regard, first, a new objective function using α-planes for the general type-2 fuzzy c-means clustering algorithm is presented. Then, based on the philosophy of the meta-heuristic optimization framework 'Simulated Annealing', a two-stage optimization algorithm is proposed. The first stage of the proposed approach is devoted to the annealing process, accompanied by its proposed perturbation mechanisms. After termination of the first stage, its output is passed to the second stage, where it is checked against other possible local optima through a heuristic algorithm. The output of this stage is then re-entered into the first stage until no better solution is obtained. The proposed approach has been evaluated using several synthesized datasets and three microarray gene expression datasets. Extensive experiments demonstrate the capabilities of the proposed approach compared with some of the state-of-the-art techniques in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Barati, Ramin
2014-01-01
Highlights: • An innovative optimization technique for multi-objective optimization is presented. • The technique utilizes a combination of CA and quasi-simulated annealing. • Mass and deformation of the fuel plate are considered as objective functions. • The computational burden is significantly reduced compared to classic tools. - Abstract: This paper presents a new and innovative optimization technique utilizing a combination of cellular automata (CA) and quasi-simulated annealing (QSA) as a solver for conceptual design optimization, which is indeed a multi-objective optimization problem. Integrating CA and QSA into a unified optimizer tool has great potential for solving multi-objective optimization problems. Simulating neighborhood effects while taking local information into account, from CA, and accepting transitions based on the decrease of the objective function and the Boltzmann distribution, from QSA, as the transition rule make this tool effective in multi-objective optimization. Optimization of fuel plate safety design, while taking into account major goals of conceptual design such as improving reliability and lifetime – which are the most significant elements during shutdown – is a major multi-objective optimization problem. Due to the huge search space in the fuel plate optimization problem, finding the optimum solution with classical methods requires a huge amount of calculation and CPU time. CA models, utilizing local information, require considerably less computation. In this study, minimizing both the mass and the deformation of the fuel plate of a multipurpose research reactor (MPRR) are considered as objective functions. The results, speed, and quality of the proposed method are comparable with those of the genetic algorithm and neural network methods previously applied to this problem.
Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations
Energy Technology Data Exchange (ETDEWEB)
Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc
2016-09-12
While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
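The abstract does not detail the scoring procedures; as one hedged example of an information-theoretic block score, the Shannon entropy of a block's value histogram can serve as a proxy for content (a uniform, feature-rich block scores high, a constant block scores zero). All names and parameters here are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def entropy_score(block, bins=16, lo=0.0, hi=1.0):
    """Score a block of scalar values in [lo, hi] by the Shannon entropy
    (in bits) of its histogram over `bins` equal-width bins."""
    counts = Counter(min(int((v - lo) / (hi - lo) * bins), bins - 1) for v in block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Blocks could then be ranked by this score, and only the top-scoring subset forwarded to the visualization pipeline until a measured time budget is exhausted.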
International Nuclear Information System (INIS)
Angland, P.; Haberberger, D.; Ivancic, S. T.; Froula, D. H.
2017-01-01
Here, a new method of analysis for angular filter refractometry (AFR) images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. AFR is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and the statistical uncertainty calculation are based on a minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
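The χ² statistic minimized by the annealing algorithm has the standard form of a sum of squared, error-weighted residuals between the measured and synthetic images. A minimal sketch, assuming independent Gaussian errors (the function name and interface are illustrative):

```python
def chi_square(observed, predicted, sigma):
    """Chi-square statistic between measured values and model predictions,
    with per-point measurement uncertainties sigma."""
    return sum((o - p) ** 2 / s ** 2 for o, p, s in zip(observed, predicted, sigma))
```

In the method described above, `predicted` would be the synthetic AFR image generated from the eight density-profile parameters, and the annealing loop would perturb those parameters to reduce this statistic.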
ALADYN - a spatially explicit, allelic model for simulating adaptive dynamics.
Schiffers, Katja H; Travis, Justin Mj
2014-12-01
ALADYN is a freely available cross-platform C++ modeling framework for stochastic simulation of joint allelic and demographic dynamics of spatially-structured populations. Juvenile survival is linked to the degree of match between an individual's phenotype and the local phenotypic optimum. There is considerable flexibility provided for the demography of the considered species and the genetic architecture of the traits under selection. ALADYN facilitates the investigation of adaptive processes to spatially and/or temporally changing conditions and the resulting niche and range dynamics. To our knowledge ALADYN is so far the only model that allows a continuous resolution of individuals' locations in a spatially explicit landscape together with the associated patterns of selection.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
Directory of Open Access Journals (Sweden)
Bailing Liu
2015-01-01
Full Text Available Facility location, inventory control, and vehicle route scheduling are three key issues to be settled in the design of logistics systems for e-commerce. Due to the online shopping features of e-commerce, customer returns are much more frequent than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous-review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with no-quality-defect returns. To solve the NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA in optimal solution, computing time, and computing stability.
Energy Technology Data Exchange (ETDEWEB)
Estevez H, O.; Duque, J. [Universidad de La Habana, Instituto de Ciencia y Tecnologia de Materiales, 10400 La Habana (Cuba); Rodriguez H, J. [UNAM, Instituto de Investigaciones en Materiales, 04510 Mexico D. F. (Mexico); Yee M, H., E-mail: oestevezh@yahoo.com [Instituto Politecnico Nacional, Escuela Superior de Fisica y Matematicas, 07738 Mexico D. F. (Mexico)
2015-07-01
1-Furoyl-3,3-diphenylthiourea (FDFT) was synthesized and characterized by FTIR, {sup 1}H and {sup 13}C NMR, and ab initio X-ray powder structure analysis. FDFT crystallizes in the monoclinic space group P2{sub 1} with a = 12.691(1), b = 6.026(2), c = 11.861(1) A, β = 117.95(2)° and V = 801.5(3) A{sup 3}. The crystal structure has been determined from laboratory X-ray powder diffraction data using a direct-space global optimization strategy (simulated annealing) followed by Rietveld refinement. The thiourea group makes a dihedral angle of 73.8(6)° with the furoyl group. In the crystal structure, molecules are linked by van der Waals interactions, forming one-dimensional chains along the a axis. (Author)
Directory of Open Access Journals (Sweden)
Yanhui Li
2013-01-01
Full Text Available Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on these issues in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution quality, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply chain environment.
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on these issues in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution quality, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply chain environment. PMID:24489489
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
International Nuclear Information System (INIS)
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-01
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Directory of Open Access Journals (Sweden)
Marco A. C. Benvenga
2011-10-01
Full Text Available Kinetic simulation and drying-process optimization of corn malt by simulated annealing (SA), to estimate the temperature and time parameters that preserve maximum amylase activity in the obtained product, are presented here. Germinated corn seeds were dried at 54-76 °C in a convective dryer, with periodic measurement of moisture content and enzymatic activity. The experimental data obtained were submitted to modeling. Simulation and optimization of the drying process were carried out using the SA method, a randomized improvement algorithm analogous to the physical annealing process. Results showed that the seeds were best dried between 3 h and 5 h. Among the models used in this work, the kinetic model of water diffusion into corn seeds showed the best fit. Drying temperature and time had a quadratic influence on the enzymatic activity. Optimization through SA found the best condition at 54 °C and between 5.6 h and 6.4 h of drying, with a specific activity in the corn malt of 5.26±0.06 SKB/mg at 15.69±0.10% remaining moisture.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-01-01
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaptation, or core observables that are recorded at core conditions that differ from those at which adaptation is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaptation for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Liang, Faming
2010-04-08
In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
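AESAMC's exact update rules are not reproduced in the record above; the general idea of an adaptive Monte Carlo sampler whose proposal tunes itself from the running chain can be sketched with a plain adaptive random-walk Metropolis sampler (this is an illustrative sketch, not the AESAMC algorithm; all names and constants are assumptions):

```python
import math
import random

def adaptive_metropolis(logp, x0, steps=2000, target_accept=0.4, seed=7):
    """Random-walk Metropolis whose proposal scale adapts toward a
    target acceptance rate -- a simple instance of adaptive MCMC."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    scale = 1.0
    samples = []
    for i in range(steps):
        y = x + rng.gauss(0.0, scale)
        lpy = logp(y)
        accepted = math.log(rng.random()) < lpy - lp
        if accepted:
            x, lp = y, lpy
        # Diminishing adaptation: grow the scale when accepting too
        # often, shrink it when accepting too rarely.
        scale *= math.exp(((1.0 if accepted else 0.0) - target_accept) / (i + 1))
        samples.append(x)
    return samples, scale

# Standard normal target: log p(x) = -x^2/2 (up to a constant).
samples, scale = adaptive_metropolis(lambda x: -0.5 * x * x, x0=5.0)
mean = sum(samples[500:]) / len(samples[500:])
```

The diminishing step size in the adaptation is what allows such samplers to remain valid despite changing their own transition kernel, the same concern the AESAMC convergence analysis addresses.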
International Nuclear Information System (INIS)
Boucard, F.; Roger, F.; Chakarov, I.; Zhuk, V.; Temkin, M.; Montagner, X.; Guichard, E.; Mathiot, D.
2005-01-01
This paper presents a global approach permitting accurate simulation of the process of ultra-shallow junctions. Physically based models of dopant implantation (BCA) and diffusion (including point and extended defects coupling) are integrated within a unique simulation tool. A useful set of the relevant parameters has been obtained through an original calibration methodology. It is shown that this approach provides an efficient tool for process modelling
Energy Technology Data Exchange (ETDEWEB)
Boucard, F. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France)]. E-mail: Frederic.Boucard@silvaco.com; Roger, F. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Chakarov, I. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Zhuk, V. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Temkin, M. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Montagner, X. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Guichard, E. [Silvaco Data Systems, 55 Rue Blaise Pascal, F38330 Montbonnot (France); Mathiot, D. [InESS, CNRS and Universite Louis Pasteur, 23 Rue du Loess, F67037 Strasbourg (France)]. E-mail: Daniel.Mathiot@iness.c-strasbourg.fr
2005-12-05
This paper presents a global approach permitting accurate simulation of the process of ultra-shallow junctions. Physically based models of dopant implantation (BCA) and diffusion (including point and extended defects coupling) are integrated within a unique simulation tool. A useful set of the relevant parameters has been obtained through an original calibration methodology. It is shown that this approach provides an efficient tool for process modelling.
Cross-section adjustment techniques for BWR adaptive simulation
Jessee, Matthew Anderson
Computational capability has been developed to adjust multi-group neutron cross-sections to improve the fidelity of boiling water reactor (BWR) modeling and simulation. The method involves propagating multi-group neutron cross-section uncertainties through BWR computational models to evaluate uncertainties in key core attributes such as core k-effective, nodal power distributions, thermal margins, and in-core detector readings. Uncertainty-based inverse theory methods are then employed to adjust multi-group cross-sections to minimize the disagreement between BWR modeling predictions and measured plant data. For this work, measured plant data were virtually simulated in the form of perturbed 3-D nodal power distributions with discrepancies with predictions of the same order of magnitude as expected from plant data. Using the simulated plant data, multi-group cross-section adjustment reduces the error in core k-effective to less than 0.2% and the RMS error in nodal power to 4% (i.e. the noise level of the in-core instrumentation). To ensure that the adapted BWR model predictions are robust, Tikhonov regularization is utilized to control the magnitude of the cross-section adjustment. In contrast to few-group cross-section adjustment, which was the focus of previous research on BWR adaptive simulation, multigroup cross-section adjustment allows for future fuel cycle design optimization to include the determination of optimal fresh fuel assembly designs using the adjusted multi-group cross-sections. The major focus of this work is to efficiently propagate multi-group neutron cross-section uncertainty through BWR lattice physics calculations. Basic neutron cross-section uncertainties are provided in the form of multi-group cross-section covariance matrices. For energy groups in the resolved resonance energy range, the cross-section uncertainties are computed using an infinitely-dilute approximation of the neutron flux. In order to accurately account for spatial and
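The cross-section adjustment described above is an uncertainty-weighted inverse problem; stripped of the covariance weighting, the role of Tikhonov regularization in damping the adjustment can be illustrated with a toy least-squares sketch (the sensitivity matrix and numbers below are invented for illustration, not BWR data):

```python
import numpy as np

def adjust_parameters(S, d, alpha):
    """Tikhonov-regularized least squares: find the parameter adjustment
    dx minimizing ||S dx - d||^2 + alpha ||dx||^2, where S maps
    parameter perturbations to changes in predicted observables and
    d is the observed-minus-predicted discrepancy."""
    n = S.shape[1]
    # Normal equations of the regularized problem:
    # (S^T S + alpha I) dx = S^T d
    return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ d)

# Toy sensitivity matrix: 3 observables, 2 parameters.
S = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
d = np.array([0.1, -0.2, 0.0])
dx_weak = adjust_parameters(S, d, alpha=1e-6)    # near plain least squares
dx_strong = adjust_parameters(S, d, alpha=10.0)  # heavily damped adjustment
```

Raising alpha shrinks the adjustment toward zero, which is how the regularization keeps the adapted model's cross-sections from drifting far from their nominal values just to fit noisy plant data.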
Jia, F.; Lichti, D.
2017-09-01
The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), the genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performance. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it, based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experimental environment was simulated from a room located on the University of Calgary campus, where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within limited iterations. Overall, GA is considered the best choice for this problem based on its ability to provide an optimal solution with fewer parameters to tune.
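The viewpoint-planning objective above is a set-cover-style problem; before reaching for SA, GA or PSO, a simple greedy baseline conveys the structure (the coverage table and tie-breaking rule here are illustrative assumptions, not the paper's method):

```python
def greedy_viewpoints(coverage):
    """Greedy heuristic for the viewpoint set-cover step: repeatedly pick
    the viewpoint covering the most still-uncovered wall segments,
    breaking ties by the smaller total incidence angle over its visible
    segments. `coverage` maps viewpoint -> {segment: incidence_angle_deg}."""
    uncovered = {s for score in coverage.values() for s in score}
    chosen = []
    while uncovered:
        best = max(
            coverage,
            key=lambda v: (len(uncovered & coverage[v].keys()),
                           -sum(coverage[v].values())),
        )
        gain = uncovered & coverage[best].keys()
        if not gain:  # remaining segments are not visible from any viewpoint
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Three candidate viewpoints, four wall segments.
coverage = {
    "A": {1: 10.0, 2: 20.0},
    "B": {2: 5.0, 3: 15.0, 4: 30.0},
    "C": {4: 5.0},
}
chosen, uncovered = greedy_viewpoints(coverage)
```

The metaheuristics compared in the paper search the same space of viewpoint subsets but can escape the myopic choices a greedy pass locks in.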
Simulating adaptive wood harvest in a changing climate
Yousefpour, Rasoul; Nabel, Julia; Pongratz, Julia
2016-04-01
The world's forests experience substantial carbon exchange fluxes between land and atmosphere. Large carbon sinks occur in response to changes in environmental conditions (such as climate change and increased atmospheric CO2 concentrations), removing about one quarter of current anthropogenic CO2 emissions. Large sinks also occur due to regrowth of forest on areas of agricultural abandonment or forest management. Forest management, on the other hand, also leads to substantial amounts of carbon being eventually released to the atmosphere. Both sinks and sources attributable to forests are therefore dependent on the intensity of management. Forest management in turn depends on the availability of resources, which is influenced by environmental conditions and the sustainability of the management systems applied. Estimating future carbon fluxes therefore requires accounting for the interaction of environmental conditions, forest growth, and management. However, this interaction is not fully captured by current modeling approaches: Earth system models depict in detail interactions between climate, the carbon cycle, and vegetation growth, but use prescribed information on management. Resource needs and land management, however, are simulated by Integrated Assessment Models that typically have only coarse representations of the influence of environmental changes on vegetation growth and are typically based on the demand for wood driven by regional population growth and energy needs. Here we present a study that provides the link between environmental conditions, forest growth and management. We extend the land component JSBACH of the Max Planck Institute's Earth system model (MPI-ESM) to simulate potential wood harvest in response to altered growth conditions and thus as adaptive to changing climate and CO2 conditions. We apply the altered model to estimate potential wood harvest for future climates (representative concentration pathways, RCPs) for the management scenario of
Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C
2015-01-01
The effects of nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations on Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, for different time periods to produce different degrees of α-Ti nano-phases in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion-resistant α-Ti phases exhibited more promising bio-corrosion resistance, owing to superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses, in which reactive Zr2Cu phases induced serious galvanic corrosion and lowered bio-corrosion resistance. Thus, whether a fully amorphous or a partially crystallized metallic glass exhibits better bio-corrosion resistance depends on the nature of the crystallized phase. Copyright © 2015 Elsevier B.V. All rights reserved.
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, where the route can combine public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytic hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which allows users to express their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination. One of the most important problems for a meta-heuristic algorithm is becoming trapped in local minima; SA mitigates this by occasionally accepting worse solutions. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. Also, the fare, the time, the user's bother, and the length of the path were considered as effective criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran in the MATLAB programming language with a GUI. The results showed the high efficiency and speed of the proposed algorithm, supporting our analyses.
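The fuzzy-AHP weights enter the SA search only through a weighted-sum route cost; that objective can be sketched as follows (the weights and edge costs are invented for illustration, not taken from the Tehran case study):

```python
def route_cost(path_edges, weights):
    """Weighted-sum objective for multi-criteria route planning: each
    edge carries per-criterion costs (fare, time, bother, length), and
    the criterion weights would come from a method such as fuzzy AHP."""
    total = 0.0
    for edge in path_edges:
        total += sum(weights[c] * edge[c] for c in weights)
    return total

# Illustrative weights and single-edge paths for two candidate modes.
weights = {"fare": 0.4, "time": 0.3, "bother": 0.2, "length": 0.1}
bus_path = [{"fare": 1.0, "time": 20.0, "bother": 2.0, "length": 5.0}]
taxi_path = [{"fare": 8.0, "time": 10.0, "bother": 1.0, "length": 5.0}]
bus_cost = route_cost(bus_path, weights)    # 7.3
taxi_cost = route_cost(taxi_path, weights)  # 6.9
```

The SA search then perturbs the route (swapping modes or intermediate nodes) and keeps or rejects each perturbation based on the change in this aggregate cost.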
Learner-Adaptive Educational Technology for Simulation in Healthcare: Foundations and Opportunities.
Lineberry, Matthew; Dev, Parvati; Lane, H Chad; Talbot, Thomas B
2018-01-17
Despite evidence that learners vary greatly in their learning needs, practical constraints tend to favor "one-size-fits-all" educational approaches, in simulation-based education as elsewhere. Adaptive educational technologies - devices and/or software applications that capture and analyze relevant data about learners to select and present individually tailored learning stimuli - are a promising aid in learners' and educators' efforts to provide learning experiences that meet individual needs. In this article, we summarize and build upon the 2017 Society for Simulation in Healthcare Research Summit panel discussion on adaptive learning. First, we consider the role of adaptivity in learning broadly. We then outline the basic functions that adaptive learning technologies must implement and the unique affordances and challenges of technology-based approaches for those functions, sharing an illustrative example from healthcare simulation. Finally, we consider future directions for accelerating research, development, and deployment of effective adaptive educational technology and techniques in healthcare simulation.
International Nuclear Information System (INIS)
Beck, L.; Jeynes, C.; Barradas, N.P.
2008-01-01
Particle induced X-ray emission (PIXE) is now routinely used for analyzing paint layers. Various setups have been developed to investigate the elemental composition of samples or wood/canvas paintings. However, the characterisation of paint layers is difficult due to their layered structure and due to the presence of organic binders. Also, standard PIXE codes do not support the quantitation of depth profiles in the general case. Elastic backscattering (both Rutherford and non-Rutherford) is usually used in ion beam analysis to determine depth profiles. However, traditional data processing using iteration between standard PIXE codes and particle scattering simulation codes is very time consuming and does not always give satisfactory results. Using two PIXE detectors and one particle detector recording simultaneously in an external beam geometry, we have applied a global minimisation code to all three spectra to solve these depth profiles self-consistently. This data treatment was applied to various different cases of paint layers and we demonstrate that the structures can be solved unambiguously, assuming that roughness effects do not introduce ambiguity
Adaptive Training Considerations for Use in Simulation-Based Systems
2010-09-01
partial and non-AT (Tennyson & Rothen, 1977). Trainees also showed an increase in motor skills with AT (Cote, Williges, & Williges, 1981; Johnson... Aptitudes, learner control and adaptive instruction. Educational Psychologist, 15, 151-158. * Tennyson, R. D., & Rothen, W. (1977). Pretask and on-task... also adapting the instruction to the student by changing such conditions as display time, sequence, format of examples, etc. (Tennyson et al., 1998)
Computerized adaptive measurement of depression: A simulation study
Directory of Open Access Journals (Sweden)
Mammen Oommen
2004-05-01
Full Text Available Abstract Background Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results The adaptive BDI had an AUC of 88% as an indicator of a SCID diagnosis of MDE, equivalent to the full BDI. The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score, and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.
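A computerized adaptive test of this kind administers whichever unasked item is most informative at the current severity estimate; under a one-parameter IRT (Rasch) model, which is an assumption here rather than necessarily the model the authors used, the selection rule can be sketched as:

```python
import math

def item_information(theta, difficulty):
    """Fisher information of a Rasch (1PL) item at severity theta:
    I = p * (1 - p), with p the probability of endorsing the item."""
    p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
    return p * (1.0 - p)

def next_item(theta, remaining):
    """Adaptive selection: administer the unasked item that is most
    informative at the current severity estimate."""
    return max(remaining, key=lambda d: item_information(theta, d))

# Illustrative item difficulties (severity thresholds) still available.
remaining = [-2.0, -0.5, 0.0, 1.5]
chosen = next_item(0.2, remaining)  # picks the item closest to theta
```

Because information peaks where difficulty matches the current estimate, the test skips uninformative items, which is how the adaptive BDI averaged 5.6 items instead of 21.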
Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors
Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.
2017-12-01
We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We will outline the current state of the art in adaptive sensing, provide analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to the varying observational environment. Adaptive sensors are of significance to resource-constrained, small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: If the physical act of sensing has a cost, how does the system determine if the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data
National Aeronautics and Space Administration — The innovation proposed here is a fidelity-adaptive combustion model (FAM) implemented into the Loci-STREAM CFD code for use at NASA for simulation of rocket...
Computer simulation program is adaptable to industrial processes
Schultz, F. E.
1966-01-01
The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.
A student-adaptive system for driving simulation
Weevers, I.; Weevers, I.; Nijholt, Antinus; van Dijk, Elisabeth M.A.G.; Kuipers, J.; Zwiers, Jakob; Brugman, A.; Lovell, B.C.; Campbell, D.A.; Fookes, C.B.; Maeder, A.J.
2003-01-01
Driving simulators have to be student-oriented. We created the Virtual Driving Instructor (VDI), an intelligent tutoring multiagent system, which provides student-adaptivity. The VDI enhances the interaction between the driving simulator and the student. It uses regressive instruction and feedback,
Energy Technology Data Exchange (ETDEWEB)
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David
2012-04-11
A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
Directory of Open Access Journals (Sweden)
Maikel Méndez-Morales
2014-09-01
Full Text Available This article presents the application of the simulated annealing (SA) algorithm to the optimal design of a water distribution system (WDS). SA is a metaheuristic search algorithm based on an analogy between the annealing process in metals (the controlled cooling of a body) and the solution of combinatorial optimization problems. The SA algorithm, together with various mathematical models, has been used successfully in the optimal design of WDSs. The full-scale WDS of the community of Marsella, in San Carlos, Costa Rica, was used as a case study. The SA algorithm was implemented with the well-known EPANET model, through the WaterNetGen extension. Three automated variants of the SA algorithm were compared with the manual trial-and-error design of the Marsella WDS, using only unit pipe costs. The results show that the three automated SA schemes yielded costs below 0.49, as a fraction, of the original cost of the trial-and-error design. This demonstrates that the SA algorithm is capable of optimizing combinatorial problems tied to the minimum-cost design of full-scale water distribution systems.
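In automated designs of this kind, SA searches over discrete pipe diameters while a hydraulic model such as EPANET scores feasibility; the encoding of that search space can be sketched as follows (the diameters, unit costs, and penalty are invented placeholders; a real run would obtain the penalty from simulated pressures):

```python
import random

DIAMETERS_MM = [50, 75, 100, 150, 200]  # available commercial pipe sizes
UNIT_COST = {50: 10.0, 75: 16.0, 100: 25.0, 150: 45.0, 200: 70.0}

def design_cost(diams, lengths, penalty):
    """Pipe cost plus a penalty standing in for hydraulic-constraint
    violations (in the real workflow EPANET would supply pressures)."""
    return sum(UNIT_COST[d] * l for d, l in zip(diams, lengths)) + penalty(diams)

def neighbor(diams, rng):
    """SA move for a discrete design: bump one pipe up or down one size."""
    out = list(diams)
    i = rng.randrange(len(out))
    j = DIAMETERS_MM.index(out[i]) + rng.choice([-1, 1])
    out[i] = DIAMETERS_MM[min(max(j, 0), len(DIAMETERS_MM) - 1)]
    return out

# Toy penalty: require every pipe to be at least 100 mm.
penalty = lambda ds: sum(1e4 for d in ds if d < 100)
lengths = [100.0, 50.0]
cost = design_cost([200, 200], lengths, penalty)
```

An SA loop over this cost and neighbor move then trades pipe cost against the penalty, which is how the automated schemes can undercut a manual trial-and-error design.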
Developing adaptive user interfaces using a game-based simulation environment
Brake, G.M. te; Greef, T.E. de; Lindenberg, J.; Rypkema, J.A.; Smets-Noor, N.J.J.M.
2006-01-01
In dynamic settings, user interfaces can provide more optimal support if they adapt to the context of use. Providing adaptive user interfaces to first responders may therefore be fruitful. A cognitive engineering method that incorporates development iterations in both a simulated and a real-world
Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture
Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne
2017-01-01
This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…
Harris Simulator Design Description for Adaptive Distributed Network Management System
National Research Council Canada - National Science Library
1986-01-01
... (ADNMS), Naval Research Laboratory (NRL). The document describes the Harris Simulator used to support the development and test of a first generation network management algorithm for a typical SDI communications network...
Rumore, D.; Kirshen, P. H.; Susskind, L.
2014-12-01
Despite scientific consensus that the climate is changing, local efforts to prepare for and manage climate change risks remain limited. How can we raise concern about climate change risks and enhance local readiness to adapt to climate change's effects? In this presentation, we will share the lessons learned from the New England Climate Adaptation Project (NECAP), a participatory action research project that tested science-based role-play simulations as a tool for educating the public about climate change risks and simulating collective risk management efforts. NECAP was a 2-year effort involving the Massachusetts Institute of Technology, the Consensus Building Institute, the National Estuarine Research Reserve System, and four coastal New England municipalities. During 2012-2013, the NECAP team produced downscaled climate change projections, a summary risk assessment, and a stakeholder assessment for each partner community. Working with local partners, we used these assessments to create a tailored, science-based role-play simulation for each site. Through a series of workshops in 2013, NECAP engaged between 115 and 170 diverse stakeholders and members of the public in each partner municipality in playing the simulation and holding a follow-up conversation about local climate change risks and possible adaptation strategies. Data were collected through before-and-after surveys administered to all workshop participants, follow-up interviews with 25 percent of workshop participants, public opinion polls conducted before and after our intervention, and meetings with public officials. This presentation will report our research findings and explain how science-based role-play simulations can be used to help communicate local climate change risks and enhance local readiness to adapt.
The adaptation method in the Monte Carlo simulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)
2015-06-15
The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations. Moreover, assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
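The motivation for replacing stochastic transport with a deterministic calculation can be seen in a toy transmission problem: a Monte Carlo photon count fluctuates around the noise-free Beer-Lambert value that a deterministic evaluation returns directly. The coefficient and path length below are arbitrary, and this is not the authors' CT code:

```python
import math
import random

def mc_transmission(n_photons, mu, depth, rng):
    """Stochastic estimate: each photon survives a path of length `depth`
    through material of attenuation mu with probability exp(-mu*depth)."""
    p_survive = math.exp(-mu * depth)
    hits = sum(1 for _ in range(n_photons) if rng.random() < p_survive)
    return hits / n_photons

def det_transmission(mu, depth):
    """Deterministic (noise-free) Beer-Lambert transmission."""
    return math.exp(-mu * depth)

mu, depth = 0.2, 5.0                  # arbitrary coefficient (1/cm), path (cm)
exact = det_transmission(mu, depth)   # exp(-1), no statistical noise
noisy = mc_transmission(10_000, mu, depth, random.Random(0))
```

The stochastic estimate needs many photon histories to approach the value the deterministic formula yields in one evaluation, which is the trade-off the adaptation exploits.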
The adaptation method in the Monte Carlo simulation for computed tomography
Directory of Open Access Journals (Sweden)
Hyounggun Lee
2015-06-01
Full Text Available The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations. Moreover, assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
Why is adaptation prevented at ecological margins? New insights from individual-based simulations.
Bridle, Jon R; Polechová, Jitka; Kawata, Masakado; Butlin, Roger K
2010-04-01
All species are restricted in their distribution. Currently, ecological models can only explain such limits if patches vary in quality, leading to asymmetrical dispersal, or if genetic variation is too low at the margins for adaptation. However, population genetic models suggest that the increase in genetic variance resulting from dispersal should allow adaptation to almost any ecological gradient. Clearly therefore, these models miss something that prevents evolution in natural populations. We developed an individual-based simulation to explore stochastic effects in these models. At high carrying capacities, our simulations largely agree with deterministic predictions. However, when carrying capacity is low, the population fails to establish for a wide range of parameter values where adaptation was expected from previous models. Stochastic or transient effects appear critical around the boundaries in parameter space between simulation behaviours. Dispersal, gradient steepness, and population density emerge as key factors determining adaptation on an ecological gradient.
Development of an adaptive sawmill- flow simulator template for ...
African Journals Online (AJOL)
Simulation is one of the most common methods for constructing models that include the random behaviour of a large number and a wide variety of components in sawmilling, such as the reduced availability of large-diameter logs under increased wood demand, which may result in smaller-diameter logs entering sawmills.
Complex adaptive systems and computational simulation in Archaeology
Directory of Open Access Journals (Sweden)
Salvador Pardo-Gordó
2017-07-01
Full Text Available Traditionally the concept of ‘complexity’ is used as a synonym for ‘complex society’, i.e., human groups with characteristics such as urbanism, inequalities, and hierarchy. The introduction of Nonlinear Systems and Complex Adaptive Systems to the discipline of archaeology has nuanced this concept. This theoretical turn has led to the rise of modelling as a method of analysis of historical processes. This work has a twofold objective: to present the theoretical current characterized by generative thinking in archaeology and to present a concrete application of agent-based modelling to an archaeological problem: the dispersal of the first ceramic production in the western Mediterranean.
Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator
Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.
2017-05-01
Analysis of research on adaptive individual and team-based training has shown that, both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes: production training, training of general computer and office equipment skills, and simulator training, including virtual simulators which use computers to simulate real-world manufacturing situations; as a rule, the evaluation of AASTM operators' knowledge is determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to training and re-training of AASTM operators provides only for technical training of operators and testing their knowledge by assessing their actions in a simulated environment.
Adaptation to a simulated central scotoma during visual search training.
Walsh, David V; Liu, Lei
2014-03-01
Patients with a central scotoma usually use a preferred retinal locus (PRL) consistently in daily activities. The selection process and time course of the PRL development are not well understood. We used a gaze-contingent display to simulate an isotropic central scotoma in normal subjects while they were practicing a difficult visual search task. As compared to foveal search, initial exposure to the simulated scotoma resulted in prolonged search reaction time, many more fixations and unorganized eye movements during search. By the end of a 1782-trial training with the simulated scotoma, the search performance improved to within 25% of normal foveal search. Accompanying the performance improvement, there were also fewer fixations, fewer repeated fixations in the same area of the search stimulus and a clear tendency of using one area near the border of the scotoma to identify the search target. The results were discussed in relation to natural development of PRL in central scotoma patients and potential visual training protocols to facilitate PRL development. Published by Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Adam D. McCurdy
Full Text Available Changes in regional temperature and precipitation patterns resulting from global climate change may adversely affect the performance of long-lived infrastructure. Adaptation may be necessary to ensure that infrastructure offers consistent service and remains cost effective. But long service times and deep uncertainty associated with future climate projections make adaptation decisions especially challenging for managers. Incorporating flexibility into systems can increase their effectiveness across different climate futures but can also add significant costs. In this paper we review existing work on flexibility in climate change adaptation of infrastructure, such as robust decision-making and dynamic adaptive pathways, apply a basic typology of flexibility, and test alternative strategies for flexibility in distributed infrastructure systems composed of multiple emplacements of a common, long-lived element: roadway culverts. Rather than treating a system of dispersed infrastructure elements as monolithic, we simulate "options flexibility" in which inherent differences between individual elements are incorporated into adaptation decisions. We use a virtual testbed of highway drainage crossing structures to examine the performance under different climate scenarios of policies that allow for multiple adaptation strategies with varying timing based on individual emplacement characteristics. Results indicate that a strategy with options flexibility informed by crossing characteristics offers a more efficient method of adaptation than do monolithic policies. In some cases this results in more cost-effective adaptation for agencies building long-lived, climate-sensitive infrastructure, even where detailed system data and analytical capacity are limited. Keywords: Climate adaptation, Stormwater management, Adaptation pathways
Adaptation of MCORTEX to the AEGIS Simulation Environment.
1984-06-01
real-time functions of the AEGIS weapons system and incorporation of valid simulation processes for test and evaluation of the total system.
International Nuclear Information System (INIS)
Zheng Han; Zhang Yingkai
2008-01-01
We propose a new adaptive sampling approach to determine free energy profiles with molecular dynamics simulations, called 'repository-based adaptive umbrella sampling' (RBAUS). Its main idea is that a sampling repository is continuously updated based on the latest simulation data, and the accumulated knowledge and sampling history are then employed to determine whether and how to update the biasing umbrella potential for subsequent simulations. In comparison with other adaptive methods, a unique and attractive feature of the RBAUS approach is that the frequency for updating the biasing potential depends on the sampling history and is adaptively determined on the fly, which makes it possible to smoothly bridge nonequilibrium and quasiequilibrium simulations. The RBAUS method is first tested by simulations on two simple systems: a double-well model system with a variety of barriers and the dissociation of a NaCl molecule in water. Its efficiency and applicability are further illustrated in ab initio quantum mechanics/molecular mechanics molecular dynamics simulations of a methyl-transfer reaction in aqueous solution.
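The flavour of the approach, a repository of visit counts that is periodically turned into a flattening bias, can be sketched as follows. The double-well potential, temperature, and update schedule are illustrative choices only, not the published RBAUS algorithm:

```python
import math
import random

def energy(x):
    """Double-well potential with a barrier at x = 0 (illustrative)."""
    return (x * x - 1.0) ** 2

def adaptive_umbrella(n_sweeps=200, n_steps=200, kT=0.2, seed=2):
    """Toy adaptive umbrella sampling: after each sweep, the accumulated
    visit histogram (the 'repository') is converted into a bias that
    discourages revisiting already well-sampled regions."""
    rng = random.Random(seed)
    nbins, lo, hi = 40, -2.0, 2.0
    width = (hi - lo) / nbins

    def bin_of(x):
        return min(nbins - 1, max(0, int((x - lo) / width)))

    hist = [1.0] * nbins          # visit repository (1.0 avoids log(0))
    bias = [0.0] * nbins
    x = -1.0                      # start in the left well
    for _ in range(n_sweeps):
        for _ in range(n_steps):
            y = x + rng.uniform(-0.3, 0.3)
            if not (lo < y < hi):
                continue
            # Metropolis step on the biased potential energy + bias
            dE = (energy(y) + bias[bin_of(y)]) - (energy(x) + bias[bin_of(x)])
            if dE <= 0 or rng.random() < math.exp(-dE / kT):
                x = y
            hist[bin_of(x)] += 1.0
        mean = sum(hist) / nbins  # flatten: over-visited bins get a penalty
        bias = [kT * math.log(h / mean) for h in hist]
    return hist

hist = adaptive_umbrella()
```

The bias update pushes the walker over the barrier far sooner than an unbiased simulation would manage at the same temperature.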
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems, such as in a propellant feedline system, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires a short time step initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
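An elementary version of such error-feedback step control can be sketched as follows; the safety factor, clamps, and step-doubling error estimate are generic textbook choices, not the authors' algorithm:

```python
def adapt_step(dt, err, tol, order=1, fac_min=0.2, fac_max=5.0, safety=0.9):
    """Feedback step-size control: grow dt when the local error estimate is
    below tolerance, shrink it when above, with clamped growth factors."""
    fac = fac_max if err == 0.0 else safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, fac))

def integrate(y0=1.0, t_end=1.0, dt=1e-3, tol=1e-5):
    """Integrate the fast transient dy/dt = -50 y with explicit Euler,
    using step doubling as the local error estimate."""
    f = lambda v: -50.0 * v
    t, y = 0.0, y0
    accepted = rejected = 0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_big = y + dt * f(y)                 # one full Euler step
        y_half = y + 0.5 * dt * f(y)          # two half steps
        y_small = y_half + 0.5 * dt * f(y_half)
        err = abs(y_big - y_small)            # local error estimate
        if err <= tol:                        # accept and advance
            t, y = t + dt, y_small
            accepted += 1
        else:                                 # reject, retry with smaller dt
            rejected += 1
        dt = adapt_step(dt, err, tol)
    return y, accepted, rejected

y_end, n_acc, n_rej = integrate()
```

The controller takes small steps through the fast initial decay and automatically stretches the step once the solution flattens, the behaviour the abstract describes for fast-slow transients.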
An adaptive algorithm for simulation of stochastic reaction-diffusion processes
International Nuclear Information System (INIS)
Ferm, Lars; Hellander, Andreas; Loetstedt, Per
2010-01-01
We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
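The SSA referred to above is Gillespie's direct method; a minimal self-contained version, applied to a hypothetical birth-death process, might look like this:

```python
import random

def ssa(rates, stoich, x0, t_end, seed=0):
    """Gillespie's direct method: draw an exponential waiting time from the
    total propensity, then pick a reaction channel proportionally."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while True:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 == 0.0:
            break                      # no reaction can fire
        t += rng.expovariate(a0)       # time to the next reaction
        if t >= t_end:
            break
        u, j, acc = rng.random() * a0, 0, props[0]
        while acc < u:                 # select channel j with prob props[j]/a0
            j += 1
            acc += props[j]
        for i, s in enumerate(stoich[j]):
            x[i] += s                  # apply the stoichiometry of channel j
    return x

# Hypothetical birth-death process: 0 -> A at rate k1, A -> 0 at rate k2*A;
# the stationary distribution is Poisson with mean k1/k2.
k1, k2 = 10.0, 1.0
rates = [lambda x: k1, lambda x: k2 * x[0]]
stoich = [[+1], [-1]]
state = ssa(rates, stoich, [0], t_end=50.0, seed=3)
```

Exact per-event simulation like this is what dominates the cost in diffusion-heavy problems, which is why the paper replaces it with tau-leaping or macroscopic treatment where the errors allow.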
Numerical simulation of supersonic over/under expanded jets using adaptive grid
International Nuclear Information System (INIS)
Talebi, S.; Shirani, E.
2001-05-01
Supersonic under- and over-expanded jets were simulated numerically. In order to achieve the solution efficiently and with high resolution, an adaptive grid is used. The axisymmetric, compressible, time-dependent Navier-Stokes equations in body-fitted curvilinear coordinates were solved numerically. The equations were discretized using a control-volume approach with the Van Leer flux splitting, and solved implicitly. The resulting computer code was used to simulate four different cases of moderate and strong under- and over-expanded jet flows. The results show that with the adaptation of the grid, the various features of this complicated flow can be observed. It was shown that the adaptation method is very efficient and can generate fine grids near high-gradient regions. (author)
Cluster Optimization and Parallelization of Simulations with Dynamically Adaptive Grids
Schreiber, Martin
2013-01-01
The present paper studies solvers for partial differential equations that work on dynamically adaptive grids stemming from spacetrees. Due to the underlying tree formalism, such grids can be efficiently decomposed into connected grid regions (clusters) on-the-fly. A graph on those clusters, classified according to their grid invariancy, workload, multi-core affinity, and further meta data, represents the inter-cluster communication. While stationary clusters already can be handled more efficiently than their dynamic counterparts, we propose to treat them as atomic grid entities and introduce a skip mechanism that allows the grid traversal to omit those regions completely. The communication graph ensures that the cluster data nevertheless are kept consistent, and several shared memory parallelization strategies are feasible. A hyperbolic benchmark that has to remesh selected mesh regions iteratively to preserve conforming tessellations serves as the benchmark for the present work. We discuss runtime improvements resulting from the skip mechanism and the implications on shared memory performance and load balancing. © 2013 Springer-Verlag.
Role-play simulations for climate change adaptation education and engagement
Rumore, Danya; Schenk, Todd; Susskind, Lawrence
2016-08-01
In order to effectively adapt to climate change, public officials and other stakeholders need to rapidly enhance their understanding of local risks and their ability to collaboratively and adaptively respond to them. We argue that science-based role-play simulation exercises, a type of 'serious game' involving face-to-face mock decision-making, have considerable potential as education and engagement tools for enhancing readiness to adapt. Prior research suggests role-play simulations and other serious games can foster public learning and encourage collective action in public policy-making contexts. However, the effectiveness of such exercises in the context of climate change adaptation education and engagement has heretofore been underexplored. We share results from two research projects that demonstrate the effectiveness of role-play simulations in cultivating climate change adaptation literacy, enhancing collaborative capacity and facilitating social learning. Based on our findings, we suggest such exercises should be more widely embraced as part of adaptation professionals' education and engagement toolkits.
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
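The idea of local adaptive refinement can be illustrated in one dimension: cells are bisected wherever the jump in a field between neighbouring points exceeds a tolerance, so resolution concentrates at the front. The tanh profile below stands in for a saturation front and is purely illustrative:

```python
import math

def refine(xs, field, tol, max_levels=8):
    """Bisect every cell whose endpoint values differ by more than tol,
    repeating until the mesh resolves the field or max_levels is reached."""
    for _ in range(max_levels):
        new, changed = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            if abs(field(b) - field(a)) > tol:
                new.append(0.5 * (a + b))   # bisect the under-resolved cell
                changed = True
            new.append(b)
        xs = new
        if not changed:
            break
    return xs

# A steep tanh front near x = 0.5 stands in for a saturation shock.
front = lambda x: math.tanh(20.0 * (x - 0.5))
mesh = refine([i / 10.0 for i in range(11)], front, tol=0.2)
```

The refined mesh keeps the original coarse spacing away from the front and clusters new points where the gradient is steep, which is the essence of the shock-type refinement used in the paper.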
An adaptive DES model that allows wall-resolved eddy simulation
International Nuclear Information System (INIS)
Yin, Zifei; Durbin, Paul A.
2016-01-01
Highlights: • A Detached Eddy Simulation model that mimics the dynamic Smagorinsky formulation. • Adaptivity of the model allows wall-resolved eddy simulation on sufficiently fine grids. • Ability to simulate natural and bypass transition is tested. - Abstract: A modification to the Adaptive-DES method of Yin et al. (2015) is proposed to improve its near-wall behavior. The modification is to the function (C_lim) that imposes a lower limit on the dynamically evaluated coefficient (C_DES). The modification allows Adaptive-DES to converge to wall-resolved eddy simulation when the grid resolution supports it. On coarse grids, or at high Reynolds number, it reverts to shielded DES, that is, to DDES. The new formulation predicts results closer to wall-resolved LES than the previous formulation. It provides an ability to simulate transition: it is tested in both orderly and bypass transition. In fully turbulent, attached flow, the modification has little effect. Any improvement in predictions stems from the better near-wall behavior of the adaptive method.
Resolution-Adapted All-Atomic and Coarse-Grained Model for Biomolecular Simulations.
Shen, Lin; Hu, Hao
2014-06-10
We develop here an adaptive multiresolution method for the simulation of complex heterogeneous systems such as protein molecules. The target molecular system is described with the atomistic structure while concurrently maintaining a mapping to coarse-grained models. The theoretical model, or force field, used to describe the interactions between two sites is automatically adjusted during the simulation according to the interaction distance/strength. Therefore, all-atomic, coarse-grained, or mixed all-atomic and coarse-grained models are used together to describe the interactions between a group of atoms and its surroundings. Because the choice of theory is made at the force field level while the sampling is always carried out in the atomic space, the new adaptive method naturally preserves the atomic structure and thermodynamic properties of the entire system throughout the simulation. The new method will be very useful in many biomolecular simulations where atomistic details are critically needed.
Dynamically adaptive Lattice Boltzmann simulation of shallow water flows with the Peano framework
Neumann, Philipp
2015-09-01
© 2014 Elsevier Inc. All rights reserved. We present a dynamically adaptive Lattice Boltzmann (LB) implementation for solving the shallow water equations (SWEs). Our implementation extends an existing LB component of the Peano framework. We revise the modular design with respect to the incorporation of new simulation aspects and LB models. The basic SWE-LB implementation is validated in different breaking dam scenarios. We further provide a numerical study on stability of the MRT collision operator used in our simulations.
Numerical simulations of multicomponent ecological models with adaptive methods.
Owolabi, Kolade M; Patidar, Kailash C
2016-01-08
The study of dynamic relationships between species in multi-species models has gained a huge amount of scientific interest over the years and will continue to maintain its dominance in both ecology and mathematical ecology in the years to come due to its practical relevance and universal existence. Some of its emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations combine low-order nonlinear terms with higher-order linear terms. In an attempt to obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore are restricted to second order in time due to some difficulties introduced by the combination of stiffness and nonlinearity. Hence, the dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas. As a result, we introduce a higher-order finite difference approximation for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal patterns. Such patterns are correctly captured in the local analysis of the model equations. Extended 2D results in agreement with typical Turing patterns, such as stripes and spots, as well as irregular snakelike structures, are presented. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
Yao, Yao; Marchal, Kathleen; Van de Peer, Yves
2014-01-01
One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to a better adaptive behaviour. Furthermore, in contrast with most hitherto developed ANN-based systems that need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store 'good behaviour' and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is reencountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment.
Directory of Open Access Journals (Sweden)
Yao Yao
Full Text Available One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to a better adaptive behaviour. Furthermore, in contrast with most hitherto developed ANN-based systems that need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store 'good behaviour' and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is reencountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment.
Yao, Yao; Marchal, Kathleen; Van de Peer, Yves
2014-01-01
One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to a better adaptive behaviour. Furthermore, in contrast with most hitherto developed ANN-based systems that need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store ‘good behaviour’ and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is reencountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment. PMID:24599485
Largenet2: an object-oriented programming library for simulating large adaptive networks.
Zschaler, Gerd; Gross, Thilo
2013-01-15
The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://biond.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. gerd@biond.org
Adaptive finite element method assisted by stochastic simulation of chemical systems
Czech Academy of Sciences Publication Activity Database
Cotter, S.L.; Vejchodský, Tomáš; Erban, R.
2013-01-01
Vol. 35, No. 1 (2013), B107-B131. ISSN 1064-8275. R&D Projects: GA AV ČR (CZ) IAA100190803. Institutional support: RVO:67985840. Keywords: chemical Fokker-Planck; adaptive meshes; stochastic simulation algorithm. Subject RIV: BA - General Mathematics. Impact factor: 1.940, year: 2013. http://epubs.siam.org/doi/abs/10.1137/120877374
Adaptation of a widespread epiphytic fern to simulated climate change conditions
Hsu, R.C.C.; Oostermeijer, J.G.B.; Wolf, J.H.D.
2014-01-01
The response of species to climate change is generally studied using ex situ manipulation of microclimate or by modeling species range shifts under simulated climate scenarios. In contrast, a reciprocal transplant experiment was used to investigate the in situ adaptive response of the elevationally
Simulating streamer discharges in 3D with the parallel adaptive Afivo framework
H.J. Teunissen (Jannis); U. M. Ebert (Ute)
2017-01-01
We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges, based on the Afivo framework that features adaptive mesh refinement, geometric multigrid methods for Poisson's equation, and OpenMP parallelism. We describe the numerical
Simulation Research on Adaptive Control of a Six-degree-of-freedom Material-testing Machine
Directory of Open Access Journals (Sweden)
Dan Wang
2014-02-01
Full Text Available This paper presents an adaptive controller equipped with a stiffness estimation method for a novel material-testing machine, in order to alleviate the performance degradation caused by the stiffness variance of the tested specimen. The dynamic model of the proposed machine is built using the Kane method, and the kinematic model is established with a closed-form solution. The stiffness estimation method is developed based on the recursive least-squares method and the proposed stiffness equivalent matrix. Control performances of the adaptive controller are simulated in detail. The simulation results illustrate that the proposed controller can greatly improve the control performance of the target material-testing machine by online stiffness estimation and adaptive parameter tuning, especially in low-cycle fatigue (LCF) and high-cycle fatigue (HCF) tests.
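The abstract names recursive least squares (RLS) as the core of the stiffness estimator but gives no formulas. A generic RLS parameter estimator can be sketched as follows; the regressor, the "stiffness" and "damping" values, and the forgetting factor are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam:
    refine the parameter estimate theta from regressor phi and
    the newly measured output y."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + float(phi.T @ P @ phi))  # gain vector
    err = y - float(phi.T @ theta)                # a-priori prediction error
    theta = theta + k * err
    P = (P - k @ phi.T @ P) / lam                 # covariance update
    return theta, P

# demo: identify an unknown "stiffness" 2.0 and "damping" -3.0 online
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), np.eye(2) * 1000.0
for _ in range(500):
    phi = rng.normal(size=2)                      # measured regressor
    y = 2.0 * phi[0] - 3.0 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
# theta now approximates [2.0, -3.0]
```

The forgetting factor lam < 1 discounts old samples, which is what lets such an estimator track a specimen whose stiffness drifts during a fatigue test.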
Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.
2013-01-01
People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477
Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L
2014-01-01
People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.
Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses
International Nuclear Information System (INIS)
Bargalló, Enric; Sureda, Pere Joan; Arroyo, Jose Manuel; Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel
2014-01-01
Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability, Availability, Maintainability, Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to properly model the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool for simulating the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications made to improve it, and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown
Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses
Energy Technology Data Exchange (ETDEWEB)
Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)
2014-10-15
Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability, Availability, Maintainability, Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to properly model the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool for simulating the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications made to improve it, and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.
Adaptive grids and numerical fluid simulations for scrape-off layer plasmas
International Nuclear Information System (INIS)
Klingshirn, Hans-Joachim
2010-01-01
Magnetic confinement nuclear fusion experiments create plasmas with local temperatures in excess of 100 million Kelvin. In these experiments the scrape-off layer, which is the plasma region in direct contact with the device wall, is of central importance both for the quality of the energy confinement and the wall material lifetime. To study the behaviour of the scrape-off layer, in addition to experiments, numerical simulations are used. This work investigates the use of adaptive discretizations of space and compatible numerical methods for scrape-off layer simulations. The resulting algorithms allow dynamic adaptation of computational grids aligned to the magnetic fields to precisely capture the strongly anisotropic energy and particle transport in the plasma. The methods are applied to the multi-fluid plasma code B2, with the goal of reducing the runtime of simulations and extending the applicability of the code.
Control of suspended low-gravity simulation system based on self-adaptive fuzzy PID
Chen, Zhigang; Qu, Jiangang
2017-09-01
In this paper, an active suspended low-gravity simulation system is proposed to follow the vertical motion of the spacecraft. Firstly, the working principle and mathematical model of the low-gravity simulation system are presented. In order to establish the balance process and suppress the strong position interference of the system, a self-adaptive fuzzy PID control strategy is proposed. It combines a PID controller with a fuzzy control strategy, so that the control system can be automatically adjusted by changing the proportional, integral and differential parameters of the controller in real time. Finally, we use Simulink to verify the performance of the controller. The results show that with the self-adaptive fuzzy PID method the system reaches the balanced state quickly, without overshoot or oscillation, while following a speed of 3 m/s, and the simulation accuracy of the system can reach 95.9% or more.
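The gain-adaptation idea can be illustrated with a toy loop in which the PID gains are rescaled by a crude rule on the error magnitude. This is a stand-in for a real fuzzy rule base (which would interpolate over rule tables); the plant, gains, and adaptation rule below are invented for illustration, not taken from the paper:

```python
def simulate_adaptive_pid(steps=2000, dt=0.001, setpoint=1.0):
    """Toy self-tuning PID loop on a first-order plant (tau*dy/dt + y = u).
    The gains are rescaled by a simple rule on |error| as a stand-in for a
    fuzzy rule base: large error boosts P and cuts I (basic anti-windup)."""
    kp0, ki0, kd0 = 8.0, 40.0, 0.005
    tau = 0.05
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        scale = 1.0 + min(abs(err), 1.0)   # crude "fuzzy" gain adaptation
        kp, ki, kd = kp0 * scale, ki0 / scale, kd0
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (u - y) / tau            # explicit Euler step of the plant
    return y
```

Running `simulate_adaptive_pid()` drives the plant output to the setpoint; a genuine fuzzy controller replaces the `scale` rule with fuzzified error and error-rate inputs mapped through a rule table.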
Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.
We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.
Optimal design of wastewater treatment plant using adaptive ...
African Journals Online (AJOL)
From this work, it has been found that artificial intelligence based optimization techniques such as adaptive simulated annealing are suitable for the optimal design of wastewater treatment plants. Journal of Applied Sciences and Environmental Management Vol. 9(1) 2005: 107-113.
Adaptive finite element simulation of flow and transport applications on parallel computers
Kirk, Benjamin Shelton
The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations poses particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers are therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport applications classes in the subsequent application studies to illustrate the flexibility of the
Görbil, Gökçe; Gelenbe, Erol
The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.
Directory of Open Access Journals (Sweden)
Robert eBauer
2015-02-01
Full Text Available Restorative brain-computer interfaces (BCIs) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting.
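A much simpler threshold-adaptation rule than the paper's Bayesian entropy criterion is the quantile heuristic common in neurofeedback practice: set the reward threshold so that a target fraction of recent feature samples would succeed. The sketch below uses this heuristic (the data and target rate are invented), only to make the notion of "threshold adaptation" concrete:

```python
import numpy as np

def adaptive_threshold(samples, target_rate=0.7):
    """Set the feedback threshold to the empirical quantile that yields
    the desired reward rate on recent feature samples. This quantile
    heuristic is a stand-in for the paper's entropy-based criterion."""
    return float(np.quantile(samples, 1.0 - target_rate))

features = np.arange(100.0)          # stand-in for a session's feature values
thr = adaptive_threshold(features)   # reward whenever feature >= thr
rate = float((features >= thr).mean())
```

Re-estimating `thr` on a sliding window of samples keeps the subject's success rate near the target even as their self-regulation ability changes.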
Leo, Jennifer; Goodwin, Donna
2014-04-01
Disability simulations have been used as a pedagogical tool to simulate the functional and cultural experiences of disability. Despite their widespread application, disagreement about their ethical use, value, and efficacy persists. The purpose of this study was to understand how postsecondary kinesiology students experienced participation in disability simulations. An interpretative phenomenological approach guided the study's collection of journal entries and clarifying one-on-one interviews with four female undergraduate students enrolled in a required adapted physical activity course. The data were analyzed thematically and interpreted using the conceptual framework of situated learning. Three themes transpired: unnerving visibility, negotiating environments differently, and tomorrow I'll be fine. The students described emotional responses to the use of wheelchairs as disability artifacts, developed awareness of environmental barriers to culturally and socially normative activities, and moderated their discomfort with the knowledge they could end the simulation at any time.
Quantum annealing for combinatorial clustering
Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph
2018-02-01
Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-the-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
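The mapping of clustering to quadratic binary optimization described above can be made concrete in a few lines. In the sketch below, one-hot binary variables x[i,c] encode "point i belongs to cluster c", within-cluster pairwise distances supply the quadratic cost terms, and a penalty enforces exactly one cluster per point; a brute-force enumerator stands in for the quantum annealer or qbsolv, and the 1D data and penalty weight are invented:

```python
import itertools
import numpy as np

def clustering_qubo(points, k, penalty=10.0):
    """Build a QUBO for assigning N 1D points to k clusters.
    Variable index i*k + c means 'point i in cluster c'. The objective is
    the sum of within-cluster distances plus penalty*(sum_c x_ic - 1)^2."""
    n, nvars = len(points), len(points) * k
    Q = np.zeros((nvars, nvars))
    d = np.abs(np.subtract.outer(points, points))   # pairwise distances
    for i in range(n):
        for j in range(i + 1, n):
            for c in range(k):                       # same-cluster cost
                Q[i * k + c, j * k + c] += d[i, j]
    for i in range(n):                               # one-hot penalty terms
        for c in range(k):
            Q[i * k + c, i * k + c] -= penalty       # linear part (x^2 = x)
            for c2 in range(c + 1, k):
                Q[i * k + c, i * k + c2] += 2 * penalty
    return Q

def brute_force_solve(Q):
    """Exhaustively minimize x^T Q x over binary vectors (tiny cases only)."""
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_e, best_x = e, x
    return best_x

points = [0.0, 0.1, 5.0, 5.1]
x = brute_force_solve(clustering_qubo(points, k=2))
labels = [int(np.argmax(x[i * 2:(i + 1) * 2])) for i in range(4)]
```

On annealing hardware the matrix `Q` (not the brute-force loop) is what gets submitted; the penalty weight must dominate the distance scale so that one-hot constraints are never violated in the optimum.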
Deterministic quantum annealing expectation-maximization algorithm
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
Investigation of parameter estimator and adaptive controller for assist pump by computer simulation.
Shimooka, T; Mitamura, Y; Yuhta, T
1991-04-01
The multi-output adaptive controller of a left ventricular assist device (LVAD) was studied by computer simulation. The controller regulated two outputs--mean aortic pressure (mAoP) and mean atrial pressure (mLAP)--by regulating vacuum pressure (input). Autoregressive models were used to describe the circulatory system. The parameters of the models were estimated by the recursive least squares method. Based on the autoregressive models, the vacuum pressure minimizing a performance index was searched. The index used was the weighted summation of the square errors. Responses of the adaptive controller were simulated when the contractility of the left ventricle was decreased at various rates and the peripheral resistance was changed. Both the mAoP and mLAP were controlled to their predicted values in the steady state. The steady-state errors of the mAoP were less than a few mm Hg, and those of the mLAP were lower than 1 mm Hg. Consequently, the estimated parameters can be regarded as true parameters, and the adaptive controller has the potential to control more than two outputs. The multi-output adaptive controller studied is useful in controlling the LVAD according to the change in circulatory condition.
Population annealing: Theory and application in spin glasses
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-01-01
Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar ...
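The algorithm described above is compact enough to sketch on a toy system: a population of replicas is cooled through a beta schedule, resampled with Boltzmann reweighting factors at each temperature step, and then relaxed with Metropolis sweeps. The sketch below applies it to a small 2D ferromagnetic Ising model (the lattice size, population size, and schedule are illustrative choices, far smaller than the paper's spin-glass simulations):

```python
import numpy as np

def population_annealing_ising(L=4, R=200, n_beta=10, sweeps=5, seed=0):
    """Population annealing for the 2D ferromagnetic Ising model (periodic
    boundaries). At each temperature step replicas are resampled with
    weights exp(-dbeta * E) and then relaxed by Metropolis sweeps, keeping
    the population close to equilibrium as it is cooled."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(R, L, L))

    def energy(s):  # each nearest-neighbor bond counted once via rolls
        return -np.sum(s * np.roll(s, 1, axis=-1) + s * np.roll(s, 1, axis=-2),
                       axis=(-2, -1))

    beta_prev = 0.0
    for beta in np.linspace(0.1, 1.0, n_beta):
        e = energy(spins)
        w = np.exp(-(beta - beta_prev) * (e - e.min()))   # resampling weights
        idx = rng.choice(R, size=R, p=w / w.sum())
        spins = spins[idx].copy()
        for _ in range(sweeps):            # Metropolis sweeps at the new beta
            for i in range(L):
                for j in range(L):
                    nb = (spins[:, (i + 1) % L, j] + spins[:, i - 1, j]
                          + spins[:, i, (j + 1) % L] + spins[:, i, j - 1])
                    dE = 2 * spins[:, i, j] * nb
                    accept = rng.random(R) < np.exp(-beta * np.clip(dE, 0, None))
                    spins[accept, i, j] *= -1
        beta_prev = beta
    return float(energy(spins).mean())
```

For this ferromagnet the population ends near the ground-state energy of -2 per spin at beta = 1; for a spin glass the same machinery additionally yields free-energy estimates from the resampling weights.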
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations
Calo, Victor M.
2011-05-14
In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp goal-oriented adaptivity and the solver work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations
Omelchenko, Yuri
2007-04-01
A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
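The core DES idea, local time increments scheduled through an event queue instead of a single global CFL-limited step, can be sketched on the simplest possible system: independent decay equations with very different rates. The decay rates, the 1/(4r) local-step rule, and the toy ODE below are invented for illustration and are far simpler than the plasma models the abstract describes:

```python
import heapq

def event_driven_decay(rates, T=1.0):
    """Asynchronously integrate independent decays du_i/dt = -r_i * u_i.
    Each component i uses its own local step dt_i = 1/(4*r_i); the event
    queue always executes the earliest pending update, so a stiff (fast)
    component never forces a small global time step on the slow ones."""
    values = {i: 1.0 for i in rates}
    updates = {i: 0 for i in rates}
    heap = [(1.0 / (4.0 * r), i) for i, r in rates.items()]
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)
        if t > T:
            continue
        dt = 1.0 / (4.0 * rates[i])
        values[i] *= 1.0 - rates[i] * dt   # forward Euler over the local step
        updates[i] += 1
        if t + dt <= T:
            heapq.heappush(heap, (t + dt, i))
    return values, updates
```

Running `event_driven_decay({'fast': 8.0, 'slow': 2.0})` gives the fast component four times as many updates as the slow one over the same interval, the proportional CPU allocation the abstract describes; real DES adds causality constraints between coupled neighbors, which this independent-component toy omits.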
Simulating streamer discharges in 3D with the parallel adaptive Afivo framework
Teunissen, Jannis; Ebert, Ute
2017-11-01
We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges. The code is based on the Afivo framework, which features adaptive mesh refinement on quadtree/octree grids, geometric multigrid methods for Poisson’s equation, and OpenMP parallelism. We describe the numerical implementation of a fluid model of the drift-diffusion-reaction type, combined with the local field approximation. Then we demonstrate its functionality with 3D simulations of long positive streamers in nitrogen in undervolted gaps. Three examples are presented. The first one shows how a stochastic background density affects streamer propagation and branching. The second one focuses on the interaction of a streamer with preionized regions, and the third one investigates the interaction between two streamers. The simulations use up to 10^8 grid cells and run in less than a day; without mesh refinement they would require more than 10^12 grid cells.
Stochl, Jan; Böhnke, Jan R; Pickett, Kate E; Croudace, Tim J
2016-06-01
Goldberg's General Health Questionnaire (GHQ) items are frequently used to assess psychological distress but no study to date has investigated the GHQ-30's potential for adaptive administration. In computerized adaptive testing (CAT) items are matched optimally to the targeted distress level of respondents instead of relying on fixed-length versions of instruments. We therefore calibrate GHQ-30 items and report a simulation study exploring the potential of this instrument for adaptive administration in a longitudinal setting. GHQ-30 responses of 3445 participants with 2 completed assessments (baseline, 7-year follow-up) in the UK Health and Lifestyle Survey were calibrated using item response theory. Our simulation study evaluated the efficiency of CAT administration of the items, cross-sectionally and longitudinally, with different estimators, item selection methods, and measurement precision criteria. To yield accurate distress measurements (marginal reliability at least 0.90) nearly all GHQ-30 items need to be administered to most survey respondents in general population samples. When lower accuracy is permissible (marginal reliability of 0.80), adaptive administration saves approximately 2/3 of the items. For longitudinal applications, change scores based on the complete set of GHQ-30 items correlate highly with change scores from adaptive administrations. The rationale for CAT-GHQ-30 is only supported when the required marginal reliability is lower than 0.9, which is most likely to be the case in cross-sectional and longitudinal studies assessing mean changes in populations. Precise measurement of psychological distress at the individual level can be achieved, but requires the deployment of all 30 items.
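Adaptive administration as simulated in such studies typically selects, at each step, the unanswered item carrying maximal Fisher information at the respondent's current distress estimate. A minimal two-parameter logistic (2PL) sketch of that selection rule follows; the item parameters are invented and do not reproduce the GHQ-30's actual calibration:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability: discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pick_item(theta, items, used):
    """Adaptive item selection: among unused (a, b) items, choose the one
    with maximal Fisher information a^2 * p * (1 - p) at ability theta."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(items):
        if idx in used:
            continue
        p = p_2pl(theta, a, b)
        info = a * a * p * (1 - p)
        if info > best_info:
            best, best_info = idx, info
    return best
```

Information peaks where p = 0.5, i.e. where the item's difficulty matches the current estimate, which is why a CAT stops early once the posterior standard error (and hence marginal reliability) reaches the required level.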
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro
2016-07-07
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
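The fast/slow classification idea can be sketched as follows: compute propensities, classify channels by expected firings per step (a stand-in for the paper's level of activity), tau-leap the fast set and treat the slow set with an exact-style draw. The birth-death model and all rate constants below are invented for illustration; a production scheme would also couple path levels for the multilevel estimator:

```python
import math, random

def poisson(lam, rng):
    # Knuth's method; adequate for the modest event counts used here
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def classify_channels(propensities, dt, activity_threshold):
    """Split channel indices into fast/slow by expected firings a_j * dt."""
    fast, slow = [], []
    for j, a in enumerate(propensities):
        (fast if a * dt >= activity_threshold else slow).append(j)
    return fast, slow

def hybrid_step(state, stoich, propensity_fns, dt, threshold, rng):
    a = [f(state) for f in propensity_fns]
    fast, slow = classify_channels(a, dt, threshold)
    new_state = list(state)
    for j in fast:                       # tau-leap the high-activity channels
        k = poisson(a[j] * dt, rng)
        for s, v in enumerate(stoich[j]):
            new_state[s] += k * v
    a_slow = sum(a[j] for j in slow)     # exact-style draw for the slow ones
    if a_slow > 0 and rng.expovariate(a_slow) < dt:
        r, acc = rng.random() * a_slow, 0.0
        for j in slow:
            acc += a[j]
            if r <= acc:
                for s, v in enumerate(stoich[j]):
                    new_state[s] += v
                break
    return new_state

# Birth-death toy model: births are fast, deaths start out slow.
rng = random.Random(0)
stoich = [[+1], [-1]]
props = [lambda x: 50.0, lambda x: 0.1 * x[0]]
state = [100]
for _ in range(100):
    state = hybrid_step(state, stoich, props, dt=0.1, threshold=2.0, rng=rng)
print(state)
```

Note that the classification is re-evaluated every step, so the death channel migrates from the slow to the fast set as the population grows.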
A simulation study for comparing testing statistics in response-adaptive randomization.
Gu, Xuemin; Lee, J Jack
2010-06-05
Response-adaptive randomizations are able to assign more patients in a comparative clinical trial to the tentatively better treatment. However, due to the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent sample comparison are still applicable in adaptive randomization provided that the patient allocation ratio converges to an appropriate target asymptotically. However, the small sample properties of commonly used test statistics in response-adaptive randomization are not fully studied. Simulations are systematically conducted to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample size less than 30, the present paper focuses on the case with a sample of 30 to give general recommendations with regard to test statistics for contingency tables in response-adaptive randomization at small sample sizes. Among all asymptotic test statistics, the Cook's correction to chi-square test (TMC) is the best in attaining the nominal size of the hypothesis test. The William's correction to log-likelihood ratio test (TML) gives slightly inflated type I error and higher power as compared with TMC, but it is more robust against imbalance in patient allocation. TMC and TML are usually the two test statistics with the highest power in different simulation scenarios. When focusing on TMC and TML, the generalized drop-the-loser urn (GDL) and sequential estimation-adjusted urn (SEU) have the best ability to attain the correct size of the hypothesis test respectively. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of different adaptive randomization
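To make the dependence between treatment arms concrete, here is a minimal randomized play-the-winner urn (a classic response-adaptive rule, though not one of the six methods enumerated above) followed by a plain Pearson chi-square on the resulting 2x2 table; the small-sample corrections (TMC, TML) studied in the paper are not implemented:

```python
import random

def rpw_trial(n, p_success, rng):
    """Randomized play-the-winner urn: a success on an arm adds a ball for
    that arm, a failure adds a ball for the other arm, so allocation drifts
    toward the tentatively better treatment."""
    urn = [1, 1]                      # one ball per treatment to start
    table = [[0, 0], [0, 0]]          # table[arm] = [successes, failures]
    for _ in range(n):
        arm = 0 if rng.random() * sum(urn) < urn[0] else 1
        success = rng.random() < p_success[arm]
        table[arm][0 if success else 1] += 1
        urn[arm if success else 1 - arm] += 1
    return table

def chi_square(table):
    """Plain Pearson chi-square on a 2x2 table (no small-sample correction)."""
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in (0, 1)]
    total = sum(row)
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            exp = row[i] * col[j] / total
            if exp > 0:
                stat += (table[i][j] - exp) ** 2 / exp
    return stat

rng = random.Random(7)
table = rpw_trial(200, p_success=(0.7, 0.4), rng=rng)
print(table, round(chi_square(table), 2))
```

Because each allocation depends on earlier outcomes, the two rows of the table are correlated, which is exactly why the small-sample null distribution of this statistic needs the corrections compared in the paper.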
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
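The core proposal mechanism behind DREAM can be sketched in its simpler ancestor, differential-evolution MCMC: each chain jumps along the difference of two other randomly chosen chains, which automatically adapts the proposal scale and orientation to the posterior. This sketch omits DREAM's randomized subspace sampling, crossover adaptation, and outlier handling; the bimodal target is invented for illustration:

```python
import math, random

def de_mc(logpost, n_chains, n_iter, dim, rng, gamma=None, eps=1e-6):
    """Differential-evolution MCMC: propose along the difference of two
    other chains, then accept or reject with a Metropolis step."""
    if gamma is None:
        gamma = 2.38 / math.sqrt(2 * dim)     # standard DE-MC jump scale
    chains = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_chains)]
    logp = [logpost(x) for x in chains]
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = [chains[i][d] + gamma * (chains[a][d] - chains[b][d])
                    + rng.gauss(0, eps) for d in range(dim)]
            lp = logpost(prop)
            if math.log(rng.random()) < lp - logp[i]:
                chains[i], logp[i] = prop, lp
    return chains

# Target: bimodal 1D mixture of Gaussians centred at -5 and +5.
def logpost(x):
    return math.log(0.5 * math.exp(-0.5 * (x[0] + 5) ** 2)
                    + 0.5 * math.exp(-0.5 * (x[0] - 5) ** 2))

rng = random.Random(3)
chains = de_mc(logpost, n_chains=10, n_iter=2000, dim=1, rng=rng)
print(sorted(round(c[0], 1) for c in chains))  # chains settle near the modes
```

Because proposals are built from the current population spread, no hand-tuned proposal covariance is needed, which is the property DREAM builds on.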
Peter, Emanuel K.
2017-12-01
In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the unbiased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the unbiased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformational entropy plays a major role in the formation of the fibril as a rate limiting factor.
Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems
Omelchenko, Yuri; Karimabadi, Homayoun
2005-10-01
Multi-scale plasmas pose a formidable computational challenge. The explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two assumptions: (1) a local time increment, dt, for a discrete quantity f can be expressed in terms of a physically meaningful quantum value, df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous flux-conservative update of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
Efficiency of quantum vs. classical annealing in nonconvex learning problems.
Baldassi, Carlo; Zecchina, Riccardo
2018-02-13
Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. Copyright © 2018 the Author(s). Published by PNAS.
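For contrast with the quantum annealer described above, classical thermal annealing is easy to state in a few lines: propose a local move and accept uphill steps with probability exp(-dE/T) while the temperature T is lowered. The 1D multi-minima landscape and the schedule below are invented for illustration:

```python
import math, random

def simulated_annealing(f, x0, t0, t_min, cooling, step, rng):
    """Classical thermal annealing: accept uphill moves with probability
    exp(-dE/T) and lower T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        xn = x + rng.gauss(0, step)
        fn = f(xn)
        if fn < fx or rng.random() < math.exp((fx - fn) / t):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Rastrigin-like 1D landscape: many local minima, global minimum at x = 0.
def f(x):
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

rng = random.Random(42)
best, fbest = simulated_annealing(f, x0=4.3, t0=50.0, t_min=1e-3,
                                  cooling=0.999, step=0.5, rng=rng)
print(round(best, 2), round(fbest, 2))
```

On landscapes like the one the paper studies, the local minima are arranged so that this thermal escape mechanism stalls exponentially, which is where simulated quantum annealing takes over.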
Directory of Open Access Journals (Sweden)
Joshua Rodewald
2016-10-01
Supply networks existing today in many industries can behave as complex adaptive systems making them more difficult to analyze and assess. Understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASN have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure while local transfer entropy can be used to analyze the network structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory which provides insights into CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
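The transfer-entropy idea can be made concrete for binary time series with history length 1: T_{X→Y} measures how much knowing x_t improves the prediction of y_{t+1} beyond y_t alone, so a driven node shows positive transfer entropy from its driver and near zero from unrelated nodes. A plug-in estimator sketch (not the local transfer entropy used in the article for dynamics):

```python
import math, random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy T_{X->Y} in bits for binary series with
    history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_both = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_both / p_self)
    return te

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                  # y simply copies x with a one-step lag
z = [rng.randint(0, 1) for _ in range(5000)]
print(round(transfer_entropy(x, y), 2))  # near 1 bit: x drives y
print(round(transfer_entropy(z, y), 2))  # near 0: z is unrelated
```

Thresholding such pairwise estimates over all node pairs is the essence of the reverse-engineering step described above.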
Directory of Open Access Journals (Sweden)
Jana Hoymann
2016-06-01
Decision-makers in the fields of urban and regional planning in Germany face new challenges. High rates of urban sprawl need to be reduced by increased inner-urban development while settlements have to adapt to climate change and contribute to the reduction of greenhouse gas emissions at the same time. In this study, we analyze conflicts in the management of urban areas and develop integrated sustainable land use strategies for Germany. The spatially explicit land use change model Land Use Scanner is used to simulate alternative scenarios of land use change for Germany for 2030. A multi-criteria analysis is set up based on these scenarios and on a set of indicators. They are used to measure whether the mitigation and adaptation objectives can be achieved and to uncover conflicts between these aims. The results show that the built-up and transport area development can be influenced both in terms of magnitude and spatial distribution to contribute to climate change mitigation and adaptation. Strengthening the inner-urban development is particularly effective in terms of reducing built-up and transport area development. It is possible to reduce built-up and transport area development to approximately 30 ha per day in 2030, which matches the sustainability objective of the German Federal Government for the year 2020. In the case of adaptation to climate change, the inclusion of extreme flood events in the context of spatial planning requirements may contribute to a reduction of the damage potential.
Selective adaptation in networks of heterogeneous populations: model, simulation, and experiment.
Directory of Open Access Journals (Sweden)
Avner Wallach
2008-02-01
Biological systems often change their responsiveness when subject to persistent stimulation, a phenomenon termed adaptation. In neural systems, this process is often selective, allowing the system to adapt to one stimulus while preserving its sensitivity to another. In some studies, it has been shown that adaptation to a frequent stimulus increases the system's sensitivity to rare stimuli. These phenomena were explained in previous work as a result of complex interactions between the various subpopulations of the network. A formal description and analysis of neuronal systems, however, is hindered by the network's heterogeneity and by the multitude of processes taking place at different time-scales. Viewing neural networks as populations of interacting elements, we develop a framework that facilitates a formal analysis of complex, structured, heterogeneous networks. The formulation developed is based on an analysis of the availability of activity dependent resources, and their effects on network responsiveness. This approach offers a simple mechanistic explanation for selective adaptation, and leads to several predictions that were corroborated in both computer simulations and in cultures of cortical neurons developing in vitro. The framework is sufficiently general to apply to different biological systems, and was demonstrated in two different cases.
International Development Research Centre (IDRC) Digital Library (Canada)
Dar es Salaam, Durban, Bloemfontein, Antananarivo, Cape Town, Ifrane ... program strategy. A number of CCAA-supported projects have relevance to other important adaptation-related themes such as disaster preparedness and climate.
Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.
Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai
2014-07-02
We develop here the methodology for dramatically accelerating the ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost concept (BB), which we validate here for describing hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is the detailed sequences of reactive intermediates and their subsequent transformation to others) for H2 oxidation obtained from aARRDyn agree well with the brute force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values, extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ∼538 s, whereas the computational cost was just 1289 ps, a speed increase of ∼0.42 trillion (10^12) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit to a quantum mechanics (QM) level, leading to the ReaxFF-OH2014 force field that was used in the simulations.
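The headline numbers above (1289 ps of boosted trajectory standing in for ∼538 s of physical time) follow from generic hyperdynamics bookkeeping: while a bias of height dV is active, each MD step advances the physical clock by a factor exp(dV/kT). The sketch below uses only this generic formula, not the adaptive ReaxFF bond boost itself, and the 1.84 eV bias is back-calculated purely for illustration:

```python
import math

def boosted_time(n_steps, dt_ps, boost_energy_eV, temp_K):
    """Hyperdynamics-style clock: physical time = simulated time * exp(dV/kT).
    Generic boost bookkeeping only, not the ReaxFF bond-boost form."""
    k_B = 8.617e-5                     # Boltzmann constant in eV/K
    boost = math.exp(boost_energy_eV / (k_B * temp_K))
    return n_steps * dt_ps * boost     # physical time in ps

# ~1.3 ns of trajectory at 798 K with a hypothetical ~1.84 eV bias maps to
# hundreds of seconds of physical time (order-of-magnitude illustration).
t_phys_ps = boosted_time(n_steps=1289000, dt_ps=0.001,
                         boost_energy_eV=1.84, temp_K=798)
print(f"{t_phys_ps * 1e-12:.0f} s")
```

The exponential dependence on dV/kT is exactly why the speedup is enormous at 798 K yet modest at flame temperatures.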
Directory of Open Access Journals (Sweden)
S. D. Parkinson
2014-09-01
High-resolution direct numerical simulations (DNSs) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier–Stokes equations. Two-dimensional simulations are known to produce unrealistic coherent vortices that are not representative of the real three-dimensional physics. The effect of this phenomenon is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 10^6 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring model performance in capturing the range of dynamics on a range of meshes. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. The use of adaptive mesh optimisation is shown to reduce the required element count by approximately two orders of magnitude in comparison with fixed, uniform mesh simulations. This leads to a substantial reduction in computational cost. The computational savings and flexibility afforded by adaptivity along with the flexibility of FE methods make this model well suited to simulating turbidity currents in complex domains.
Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method
Agbaglah, Gilou; Delaux, Sébastien; Fuster, Daniel; Hoepffner, Jérôme; Josserand, Christophe; Popinet, Stéphane; Ray, Pascal; Scardovelli, Ruben; Zaleski, Stéphane
2011-02-01
We describe computations performed using the Gerris code, an open-source software implementing finite volume solvers on an octree adaptive grid together with a piecewise linear volume of fluid interface tracking method. The parallelisation of Gerris is achieved by domain decomposition. We show examples of the capabilities of Gerris on several types of problems. The impact of a droplet on a layer of the same liquid results in the formation of a thin air layer trapped between the droplet and the liquid layer, which the adaptive refinement allows us to capture. It is followed by the jetting of a thin corolla emerging from below the impacting droplet. The jet atomisation problem is another extremely challenging computational problem, in which a large number of small scales are generated. Finally we show an example of a turbulent jet computation in an equivalent resolution of 6×1024 cells. The jet simulation is based on the configuration of the Deepwater Horizon oil leak.
Adaptive learning in agents behaviour: A framework for electricity markets simulation
DEFF Research Database (Denmark)
Pinto, Tiago; Vale, Zita; Sousa, Tiago M.
2014-01-01
... allows integrating different strategic approaches for electricity market negotiations, and choosing the most appropriate one at each time, for each different negotiation context. This methodology is integrated in ALBidS (Adaptive Learning strategic Bidding System), a multiagent system that provides decision support to MASCEM's negotiating agents so that they can properly achieve their goals. ALBidS uses artificial intelligence methodologies and data analysis algorithms to provide effective adaptive learning capabilities to such negotiating entities. The main contribution is provided by a methodology that combines several distinct strategies to build action proposals, so that the best can be chosen at each time, depending on the context and simulation circumstances. The choosing process includes reinforcement learning algorithms, a mechanism for negotiating contexts analysis, a mechanism for the management ...
Cyberwar XXI: quantifying the unquantifiable: adaptive AI for next-generation conflict simulations
Miranda, Joseph; von Kleinsmid, Peter; Zalewski, Tony
2004-08-01
The era of the "Revolution in Military Affairs," "4th Generation Warfare" and "Asymmetric War" requires novel approaches to modeling warfare at the operational and strategic level of modern conflict. For example, "What if, in response to our planned actions, the adversary reacts in such-and-such a manner? What will our response be? What are the possible unintended consequences?" Next generation conflict simulation tools are required to help create and test novel courses of action (COAs) in support of real-world operations. Conflict simulations allow non-lethal and cost-effective exploration of the "what-if" of COA development. The challenge has been to develop an automated decision-support software tool which allows competing COAs to be compared in simulated dynamic environments. Principal Investigator Joseph Miranda's research is based on modeling an integrated political, military, economic, social, infrastructure and information (PMESII) environment. The main effort was to develop an adaptive AI engine which models agents operating within an operational-strategic conflict environment. This was implemented in Cyberwar XXI - a simulation which models COA selection in a PMESII environment. Within this framework, agents simulate decision-making processes and provide predictive capability of the potential behavior of Command Entities. The 2003 Iraq conflict is the first scenario ready for V&V testing.
Adaptive dynamic load-balancing with irregular domain decomposition for particle simulations
Begau, Christoph; Sutmann, Godehard
2015-05-01
We present a flexible and fully adaptive dynamic load-balancing scheme, which is designed for particle simulations of three-dimensional systems with short ranged interactions. The method is based on domain decomposition with non-orthogonal non-convex domains, which are constructed based on a local repartitioning of computational work between neighbouring processors. Domains are dynamically adjusted in a flexible way under the condition that the original topology is not changed, i.e. neighbour relations between domains are retained, which guarantees a fixed communication pattern for each domain during a simulation. Extensions of this scheme are discussed and illustrated with examples, which generalise the communication patterns and do not fully restrict data exchange to direct neighbours. The proposed method relies on a linked cell algorithm, which makes it compatible with existing implementations in particle codes and does not modify the underlying algorithm for calculating the forces between particles. The method has been implemented in the molecular dynamics community code IMD and performance has been measured for various molecular dynamics simulations of systems representing realistic problems from materials science. The method is found to balance the work between processors in simulations with strongly inhomogeneous and dynamically changing particle distributions, which results in a significant increase of the efficiency of the parallel code compared both to unbalanced simulations and conventional load-balancing strategies.
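The local repartitioning idea (exchange work only between neighbouring processors while keeping the topology fixed) can be sketched with first-order diffusive load balancing on a 1D chain of domains; the real scheme moves 3D domain boundaries rather than abstract work units:

```python
def diffuse_load(loads, alpha, n_sweeps):
    """First-order diffusive load balancing: each domain exchanges a fraction
    alpha of the load difference with each direct neighbour, so the neighbour
    topology (and hence the communication pattern) never changes."""
    loads = list(loads)
    for _ in range(n_sweeps):
        # flows computed from a snapshot, then applied (synchronous update)
        flows = [alpha * (loads[i] - loads[i + 1]) for i in range(len(loads) - 1)]
        for i, f in enumerate(flows):
            loads[i] -= f
            loads[i + 1] += f
    return loads

# One heavily loaded domain in a chain of five:
balanced = diffuse_load([16.0, 2.0, 2.0, 2.0, 2.0], alpha=0.25, n_sweeps=50)
print([round(x, 2) for x in balanced])  # loads approach the mean of 4.8
```

Total work is conserved by construction, and because only neighbour-to-neighbour transfers occur, the pattern maps directly onto fixed halo-exchange communication.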
Shen, Lin; Yang, Weitao
2018-03-13
Direct molecular dynamics (MD) simulation with ab initio quantum mechanical and molecular mechanical (QM/MM) methods is very powerful for studying the mechanism of chemical reactions in a complex environment but also very time-consuming. The computational cost of QM/MM calculations during MD simulations can be reduced significantly using semiempirical QM/MM methods with lower accuracy. To achieve higher accuracy at the ab initio QM/MM level, a correction on the existing semiempirical QM/MM model is an attractive idea. Recently, we reported a neural network (NN) method as QM/MM-NN to predict the potential energy difference between semiempirical and ab initio QM/MM approaches. The high-level results can be obtained using neural network based on semiempirical QM/MM MD simulations, but the lack of direct MD samplings at the ab initio QM/MM level is still a deficiency that limits the applications of QM/MM-NN. In the present paper, we developed a dynamic scheme of QM/MM-NN for direct MD simulations on the NN-predicted potential energy surface to approximate ab initio QM/MM MD. Since some configurations excluded from the database for NN training were encountered during simulations, which may cause some difficulties on MD samplings, an adaptive procedure inspired by the selection scheme reported by Behler [Int. J. Quantum Chem. 2015, 115, 1032; Angew. Chem., Int. Ed. 2017, 56, 12828] was employed with some adaptations to update the NN and carry out MD iteratively. We further applied the adaptive QM/MM-NN MD method to the free energy calculation and transition path optimization on chemical reactions in water. The results at the ab initio QM/MM level can be well reproduced using this method after 2-4 iteration cycles. The saving in computational cost is about 2 orders of magnitude. It demonstrates that the QM/MM-NN with direct MD simulations has great potentials not only for the calculation of thermodynamic properties but also for the characterization of
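The delta-learning idea underlying QM/MM-NN (learn the difference between cheap and expensive energies, then add the predicted correction to the cheap surface) can be sketched with a least-squares polynomial standing in for the neural network; the two 1D energy surfaces below are invented toys:

```python
import math

def fit_delta(xs, e_low, e_high, degree):
    """Least-squares polynomial fit of the correction E_high - E_low
    (a stand-in here for the neural network used in QM/MM-NN)."""
    d = [h - l for h, l in zip(e_high, e_low)]
    m = degree + 1
    # normal equations A c = b for the polynomial coefficients c
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(di * x ** i for di, x in zip(d, xs)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            fct = A[r][col] / A[col][col]
            for c2 in range(col, m):
                A[r][c2] -= fct * A[col][c2]
            b[r] -= fct * b[col]
    c = [0.0] * m
    for i in reversed(range(m)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, m))) / A[i][i]
    return lambda x: sum(ci * x ** i for i, ci in enumerate(c))

# Toy surfaces standing in for semiempirical vs ab initio QM/MM energies.
e_low_fn = lambda x: x * x
e_high_fn = lambda x: x * x + 0.5 * math.sin(3 * x) + 0.1 * x

xs = [i / 10 - 1.0 for i in range(21)]       # training "geometries" in [-1, 1]
delta = fit_delta(xs, [e_low_fn(x) for x in xs],
                  [e_high_fn(x) for x in xs], degree=7)
x_test = 0.33
err = abs(e_low_fn(x_test) + delta(x_test) - e_high_fn(x_test))
print(round(err, 4))  # small residual: the corrected surface tracks the high-level one
```

The adaptive loop in the paper then adds newly visited configurations to the training set and refits, so the corrected surface stays reliable where the dynamics actually goes.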
Pawlik, Andreas H.; Schaye, Joop; Dalla Vecchia, Claudio
2015-08-01
We present a suite of cosmological radiation-hydrodynamical simulations of the assembly of galaxies driving the reionization of the intergalactic medium (IGM) at z ≳ 6. The simulations account for the hydrodynamical feedback from photoionization heating and the explosion of massive stars as supernovae (SNe). Our reference simulation, which was carried out in a box of size 25 h^-1 comoving Mpc using 2 × 512^3 particles, produces a reasonable reionization history and matches the observed UV luminosity function of galaxies. Simulations with different box sizes and resolutions are used to investigate numerical convergence, and simulations in which either SNe or photoionization heating or both are turned off, are used to investigate the role of feedback from star formation. Ionizing radiation is treated using accurate radiative transfer at the high spatially adaptive resolution at which the hydrodynamics is carried out. SN feedback strongly reduces the star formation rates (SFRs) over nearly the full mass range of simulated galaxies and is required to yield SFRs in agreement with observations. Photoheating helps to suppress star formation in low-mass galaxies, but its impact on the cosmic SFR is small. Because the effect of photoheating is masked by the strong SN feedback, it does not imprint a signature on the UV galaxy luminosity function, although we note that our resolution is insufficient to model star-forming minihaloes cooling through molecular hydrogen transitions. Photoheating does provide a strong positive feedback on reionization because it smooths density fluctuations in the IGM, which lowers the IGM recombination rate substantially. Our simulations demonstrate a tight non-linear coupling of galaxy formation and reionization, motivating the need for the accurate and simultaneous inclusion of photoheating and SN feedback in models of the early Universe.
A novel agent-based simulation framework for sensing in complex adaptive environments
Niazi, Muaz A.; Hussain, Amir
2017-01-01
In this paper we present a novel Formal Agent-Based Simulation framework (FABS). FABS uses formal specification as a means of clear description of wireless sensor networks (WSN) sensing a Complex Adaptive Environment. This specification model is then used to develop an agent-based model of both the wireless sensor network as well as the environment. As proof of concept, we demonstrate the application of FABS to a boids model of self-organized flocking of animals monitored by a random deployme...
A space-time adaptive method for simulating complex cardiac dynamics.
Cherry, E M; Greenside, H S; Henriquez, C S
2000-02-07
For plane-wave and many-spiral states of the experimentally based Luo-Rudy 1 model of heart tissue in large (8 cm square) domains, we show that a space-time-adaptive time-integration algorithm can achieve a factor of 5 reduction in computational effort and memory, but without a reduction in accuracy, when compared to an algorithm using a uniform space-time mesh at the finest resolution. Our results indicate that such an algorithm can be extended straightforwardly to simulate quantitatively three-dimensional electrical dynamics over the whole human heart.
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
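A minimal illustration of generating a SPASE-style Granule description with the standard library (element names follow the SPASE data model only loosely; the resource IDs and URL are invented, and a real ADAPT run would validate against the full SPASE schema):

```python
import xml.etree.ElementTree as ET

def make_granule(parent_id, file_url, start, stop):
    """Build a minimal SPASE-style Granule description: one data file linked
    to its parent resource, with an identifier and an access URL."""
    spase = ET.Element("Spase")
    g = ET.SubElement(spase, "Granule")
    # hypothetical resource identifier derived from the parent ID and file name
    ET.SubElement(g, "ResourceID").text = parent_id + "/" + file_url.rsplit("/", 1)[-1]
    ET.SubElement(g, "ParentID").text = parent_id
    ET.SubElement(g, "StartDate").text = start
    ET.SubElement(g, "StopDate").text = stop
    src = ET.SubElement(g, "Source")
    ET.SubElement(src, "SourceType").text = "Data"
    ET.SubElement(src, "URL").text = file_url
    return ET.tostring(spase, encoding="unicode")

xml_text = make_granule(
    "spase://Example/NumericalData/MissionX/Instrument/PT1M",
    "https://example.org/data/missionx_20150101_v01.cdf",
    "2015-01-01T00:00:00Z", "2015-01-02T00:00:00Z")
print(xml_text)
```

Running such a generator over a nightly file listing, and diffing against the previous night's output, is the essence of the tracking workflow described above.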
International Nuclear Information System (INIS)
Timofeev, E.V.; Tahir, R.B.; Voinovich, P.A.; Moelder, S.
2004-01-01
The concept of 'twin' grid nodes is discussed in the context of unstructured, adaptive meshes that are suitable for highly unsteady flows. The concept is applicable to internal boundary contours (within the computational domain) where the boundary conditions may need to be changed dynamically; for instance, an impermeable solid wall segment can be redefined as a fully permeable invisible boundary segment during the course of the simulation. This can be used to simulate unsteady gas flows with internal boundaries where the flow conditions may change rapidly and drastically. As a demonstration, the idea is applied to study the starting process in hypersonic air inlets by rupturing a diaphragm or by opening wall-perforations. (author)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Adaptive life simulator: A novel approach to modeling the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Kangas, L.J.; Keller, P.E.; Hashem, S. [and others
1995-06-01
In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. This approach also exploits sensor fusion applied to biomedical sensors. Sensor fusion optimizes the utilization of the sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations
Toosi, Siavash; Larsson, Johan
2015-11-01
Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.
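The scalar measure described above, small-scale resolved energy driving refinement, can be sketched generically on a 1-D field. The box filter, the quantile-based flagging, and all constants below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def small_scale_energy(u, passes=1):
    """High-pass estimate of small-scale resolved energy on a 1-D periodic grid.

    The 'large-scale' field is approximated by repeated 3-point box filtering;
    the residual u - <u> carries the smallest resolved scales. This is an
    illustrative stand-in for a test filter, not the paper's error-measure.
    """
    ubar = u.copy()
    for _ in range(passes):
        ubar = 0.25 * (np.roll(ubar, 1) + 2.0 * ubar + np.roll(ubar, -1))
    return 0.5 * (u - ubar) ** 2

def refine_flags(err, frac=0.1):
    """Flag the cells whose error measure lies in the top `frac` fraction."""
    thresh = np.quantile(err, 1.0 - frac)
    return err >= thresh

# Smooth wave plus noise standing in for under-resolved turbulent scales.
u = np.sin(np.linspace(0, 2 * np.pi, 256, endpoint=False))
u += 0.05 * np.random.default_rng(0).standard_normal(256)
e = small_scale_energy(u)
flags = refine_flags(e, frac=0.1)
```

An anisotropic extension would evaluate such a measure per direction and refine each grid dimension separately.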
3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks
International Nuclear Information System (INIS)
Samtaney, S.; Jardin, S.C.; Colella, P.; Martin, D.F.
2003-01-01
We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
Analysis, adaptive control and circuit simulation of a novel finance system with dissaving
Directory of Open Access Journals (Sweden)
Tacha Ourania I.
2016-03-01
In this paper a novel 3-D nonlinear finance chaotic system, consisting of two nonlinearities with a negative saving term (called 'dissaving'), is presented. The dynamical analysis of the proposed system confirms its complex dynamic behavior, which is studied by using well-known simulation tools of nonlinear theory, such as the bifurcation diagram, Lyapunov exponents and phase portraits. Also, some interesting phenomena related to nonlinear theory are observed, such as a route to chaos through a period-doubling sequence and crisis phenomena. In addition, an interesting scheme for adaptive control of the finance system's behavior is presented. Furthermore, the novel nonlinear finance system is emulated by an electronic circuit and its dynamical behavior is studied by using the electronic simulation package Cadence OrCAD in order to confirm the feasibility of the theoretical model.
Refined adaptive optics simulation with wide field of view for the E-ELT
International Nuclear Information System (INIS)
Chebbo, Manal
2012-01-01
Refined simulation tools for wide-field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. Increasing the number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes unusable due to the huge number of operations to be performed at each step of the Adaptive Optics (AO) loop process. This computational burden requires new approaches in the computation of the DM voltages from WFS data. The classical matrix inversion and matrix-vector multiplication have to be replaced by a cleverer iterative resolution of the Least Square or Minimum Mean Square Error criterion (based on sparse-matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves will become more complex: data fusion from multiple Laser and Natural Guide Stars (LGS/NGS) will have to be optimized; mirrors covering the whole field of view will have to be coupled, using split or integrated tomography schemes, with dedicated mirrors inside the scientific instrument itself; differential pupil and/or field rotations will have to be considered; etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For these reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS/NGS) to effectively manage modes such as tip, tilt and defocus in the entire process of tomographic reconstruction. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) which have to manage a combination of adaptive telescope and post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph.
Akhmatskaya, Elena; Fernández-Pendás, Mario; Radivojević, Tijana; Sanz-Serna, J M
2017-10-24
The modified Hamiltonian Monte Carlo (MHMC) methods, i.e., importance sampling methods that use modified Hamiltonians within a Hybrid Monte Carlo (HMC) framework, often outperform standard techniques such as molecular dynamics (MD) and HMC in sampling efficiency. The performance of MHMC may be enhanced further through the rational choice of the simulation parameters and by replacing the standard Verlet integrator with more sophisticated splitting algorithms. Unfortunately, it is not easy to identify the appropriate values of the parameters that appear in those algorithms. We propose a technique, which we call MAIA (Modified Adaptive Integration Approach), that, for a given simulation system and a given time step, automatically selects the optimal integrator within a useful family of two-stage splitting formulas. Extended MAIA (or e-MAIA) is an enhanced version of MAIA, which additionally supplies a value of the method-specific parameter that, for the problem under consideration, keeps the momentum acceptance rate at a user-desired level. The MAIA and e-MAIA algorithms have been implemented, with no computational overhead during simulations, in MultiHMC-GROMACS, a modified version of the popular software package GROMACS. Tests performed on well-known molecular models demonstrate the superiority of the suggested approaches over a range of integrators (both standard and recently developed), as well as their capacity to improve the sampling efficiency of GSHMC, a notable method for molecular simulation in the MHMC family. GSHMC combined with e-MAIA shows remarkably good performance when compared to MD and HMC coupled with the appropriate adaptive integrators.
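A two-stage splitting integrator of the family MAIA selects from can be sketched for a single degree of freedom. The map B(b dt) A(dt/2) B((1-2b) dt) A(dt/2) B(b dt), where A drifts positions and B kicks momenta, is the standard two-stage family; the specific coefficient used below is one published minimum-error choice, not the value MAIA itself would select for a given system:

```python
def two_stage_step(q, p, dt, b, force, mass=1.0):
    """One step of the two-stage splitting integrator
    B(b*dt) A(dt/2) B((1-2b)*dt) A(dt/2) B(b*dt).
    b = 0.25 recovers two half-steps of velocity Verlet; schemes like
    MAIA tune b per system and time step (this sketch just fixes it).
    """
    p = p + b * dt * force(q)            # first kick
    q = q + 0.5 * dt * p / mass          # half drift
    p = p + (1.0 - 2.0 * b) * dt * force(q)  # central kick
    q = q + 0.5 * dt * p / mass          # half drift
    p = p + b * dt * force(q)            # final kick
    return q, p

# Harmonic oscillator: energy of a symplectic scheme stays bounded.
force = lambda q: -q
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = two_stage_step(q, p, dt=0.1, b=0.21178, force=force)
energy = 0.5 * p ** 2 + 0.5 * q ** 2
```

The point of the sketch is that the whole one-parameter family shares this structure, so an adaptive scheme only has to pick `b`.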
Grain coarsening mechanism of Cu thin films by rapid annealing
International Nuclear Information System (INIS)
Sasajima, Yasushi; Kageyama, Junpei; Khoo, Khyoupin; Onuki, Jin
2010-01-01
Cu thin films have been produced by an electroplating method using a nominal 9N anode and a nominal 6N CuSO4·5H2O electrolyte. Film samples were heat-treated by two procedures: conventional isothermal annealing in hydrogen atmosphere (abbreviated as H2 annealing) and rapid thermal annealing with an infrared lamp (abbreviated as RTA). After heat treatment, the average grain diameters and the grain orientation distributions were examined by electron backscattering pattern analysis. The RTA samples (400 °C for 5 min) have a larger average grain diameter, more uniform grain distribution and higher ratio of (111) orientation than the H2-annealed samples (400 °C for 30 min). This means that RTA can produce films with coarser and more uniformly distributed grains than H2 annealing within a short time, i.e. only a few minutes. To clarify the grain coarsening mechanism, grain growth by RTA was simulated using the phase field method. The simulated grain diameter reaches its maximum at a heating rate of the same order as that in the actual RTA experiment. The maximum grain diameter is larger than that obtained by H2 annealing with the same annealing time at the isothermal stage as in RTA. The distribution of the misorientation was analyzed, which led to a proposed grain growth model for the RTA method.
Energy Technology Data Exchange (ETDEWEB)
Manrique, John Peter O.; Costa, Alessandro M., E-mail: johnp067@usp.br, E-mail: amcosta@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil)
2016-07-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. Treatment planning systems (TPS), which are used to calculate the dose delivered to the patient undergoing radiation therapy, employ convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform the calculation of three-dimensional doses, ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the work of Tsallis on quasi-equilibrium statistics. For the validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam, using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
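The GSA idea, a Tsallis-type generalisation of the Metropolis acceptance rule inside a simulated annealing loop, can be sketched on a toy one-dimensional misfit. The acceptance form, the cooling schedule, and the cost function below are illustrative assumptions, not the authors' implementation:

```python
import random

def gsa_accept(dE, T, q=1.5):
    """Tsallis-style acceptance probability used in generalized simulated
    annealing (GSA); q -> 1 recovers the Boltzmann rule. Illustrative form:
    P = [1 + (q-1)*dE/T]^(1/(1-q)) for uphill moves (dE > 0).
    """
    if dE <= 0:
        return 1.0
    x = 1.0 + (q - 1.0) * dE / T
    return 0.0 if x <= 0 else x ** (1.0 / (1.0 - q))

def anneal(cost, x0, step=0.5, T0=5.0, iters=5000, q=1.5, seed=1):
    """Minimise `cost` by random moves accepted with gsa_accept."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(1, iters + 1):
        T = T0 / k                         # simple cooling (illustrative)
        cand = x + rng.uniform(-step, step)
        if gsa_accept(cost(cand) - cost(x), T, q) >= rng.random():
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

# Toy 'misfit' with two minima; the global one is at x = 2.
cost = lambda x: (x * x - 4.0) ** 2 + 0.3 * (x - 2.0) ** 2
xbest = anneal(cost, x0=-3.0)
```

In the spectrum-reconstruction setting, the state would instead be a binned fluence spectrum and the cost a least-squares misfit to the measured transmission curves.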
Ltaief, Hatem
2016-06-02
We present a high-performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used as an operational testbed for simulating the design of new instruments for the European Extremely Large Telescope project (E-ELT), the world's biggest eye and one of Europe's highest priorities in ground-based astronomy. The simulation corresponds to a multi-step multi-stage procedure, which is fed, in near real-time, by system and turbulence data coming from the telescope environment. Based on the PLASMA library powered by the OmpSs dynamic runtime system, our implementation relies on a task-based programming model to permit an asynchronous out-of-order execution. Using modern multicore architectures associated with the enormous computing power of GPUs, the resulting data-driven compute-intensive simulation of the entire MOAO application, composed of the tomographic reconstructor and the observing sequence, is capable of coping with the aforementioned real-time challenge and stands as a reference implementation for the computational astronomy community.
International Nuclear Information System (INIS)
Hummels, Cameron B.; Bryan, Greg L.
2012-01-01
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find that a gas disk forms in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters, including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy into the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
Adaptation of non-technical skills behavioural markers for delivery room simulation.
Bracco, Fabrizio; Masini, Michele; De Tonetti, Gabriele; Brogioni, Francesca; Amidani, Arianna; Monichino, Sara; Maltoni, Alessandra; Dato, Andrea; Grattarola, Claudia; Cordone, Massimo; Torre, Giancarlo; Launo, Claudio; Chiorri, Carlo; Celleno, Danilo
2017-03-17
Simulation in healthcare has proved to be a useful method in improving skills and increasing the safety of clinical operations. The debriefing session, after the simulated scenario, is the core of the simulation, since it allows participants to integrate the experience with the theoretical frameworks and the procedural guidelines. There is consistent evidence for the relevance of non-technical skills (NTS) for the safe and efficient accomplishment of operations. However, the observation, assessment and feedback on these skills is particularly complex, because the process needs expert observers and the feedback is often provided in judgmental and ineffective ways. The aim of this study was therefore to develop and test a set of observation and rating forms for the NTS behavioural markers of multi-professional teams involved in delivery room emergency simulations (MINTS-DR, Multi-professional Inventory for Non-Technical Skills in the Delivery Room). The MINTS-DR was developed by adapting the existing tools and, when needed, by designing new tools according to the literature. We followed a bottom-up process accompanied by interviews and co-design between practitioners and psychology experts. The forms were specific for anaesthetists, gynaecologists, nurses/midwives, and assistants, plus a global team assessment tool. We administered the tools in five editions of a simulation training course that involved 48 practitioners. Ratings on usability and usefulness were collected. The mean ratings of the usability and usefulness of the tools were either not statistically different from, or higher than, 4 on a 5-point rating scale. In both cases, no significant differences were found across professional categories. The MINTS-DR is quick and easy to administer. It is judged to be a useful asset in maximising the learning experience that is provided by the simulation.
Djuana, E.; Rahardjo, K.; Gozali, F.; Tan, S.; Rambung, R.; Adrian, D.
2018-01-01
A city could be categorized as a smart city when its information technology has been developed to the point that the administration can sense, understand, and control every resource to serve its people and sustain the development of the city. One of the smart city aspects is transportation and traffic management. This paper presents a research project to design an adaptive traffic light control system as a part of a smart system for optimizing road utilization and reducing congestion. The research problems addressed include: (1) congestion in one direction toward an intersection due to traffic conditions that change dynamically during the day, while the timing cycles in traffic light systems are mostly static; (2) no timing synchronization among traffic lights at adjacent intersections, causing unsteady flows; (3) difficulties in monitoring traffic conditions at the intersection and the lack of facilities for remotely controlling traffic lights. In this research, a simulator has been built to model the adaptivity and integration among traffic light controllers at adjacent intersections, and a case study consisting of three sets of intersections along Jalan K. H. Hasyim Ashari has been simulated. It can be concluded that timing-slot synchronization among traffic lights is crucial for maintaining a steady traffic flow.
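A minimal sketch of the kind of adaptive timing rule such a simulator might model: the green time of a fixed cycle is split across approaches in proportion to their queue lengths, with a guaranteed minimum green. The allocation rule and all constants are hypothetical, not taken from the paper:

```python
def green_times(queues, cycle=90, min_green=10):
    """Split a fixed signal cycle (seconds) among the approaches of an
    intersection in proportion to their queue lengths, guaranteeing a
    minimum green per approach. Hypothetical rule for illustration.
    """
    n = len(queues)
    spare = cycle - n * min_green          # seconds left after minimum greens
    total = sum(queues)
    if total == 0:
        return [cycle / n] * n             # no demand: split evenly
    return [min_green + spare * q / total for q in queues]

# Four approaches with queue lengths 12, 4, 8 and 0 vehicles.
g = green_times([12, 4, 8, 0], cycle=90, min_green=10)
```

Synchronization across adjacent intersections would then offset the start of each cycle by the expected travel time between them.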
Simulating local adaptation to climate of forest trees with a Physio-Demo-Genetics model.
Oddou-Muratorio, Sylvie; Davi, Hendrik
2014-04-01
One challenge of evolutionary ecology is to predict the rate and mechanisms of population adaptation to environmental variations. The variations in most life history traits are shaped both by individual genotypic variation and by environmental variation. Forest trees exhibit high levels of genetic diversity, large population sizes, and gene flow, and they also show a high level of plasticity for life history traits. We developed a new Physio-Demo-Genetics model (denoted PDG) coupling (i) a physiological module simulating individual tree responses to the environment; (ii) a demographic module simulating tree survival, reproduction, and pollen and seed dispersal; and (iii) a quantitative genetics module controlling the heritability of key life history traits. We used this model to investigate the plastic and genetic components of the variations in the timing of budburst (TBB) along an elevational gradient of Fagus sylvatica (the European beech). Using a repeated 5-year climatic sequence, we showed that five generations of natural selection were sufficient to develop nonmonotonic genetic differentiation in the TBB along the local climatic gradient, but also that plastic variation among different elevations and years was higher than genetic variation. PDG complements theoretical models and provides testable predictions to understand the adaptive potential of tree populations.
Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms
International Nuclear Information System (INIS)
Talpaert, Arthur
2017-01-01
This PhD work presents the implementation of the simulation of two-phase flows under the conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve that, we study several models for thermal-hydraulic flows and focus on a technique for the capture of the thin interface between liquid and vapour phases. We thus review some possible techniques for Adaptive Mesh Refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve the precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations we present also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author)
Simulating spatial adaption of groundwater pumping on seawater intrusion in coastal regions
Grundmann, Jens; Ladwig, Robert; Schütze, Niels; Walther, Marc
2016-04-01
Coastal aquifer systems are used intensively to meet the growing demands for water in those regions. They are especially at risk of seawater intrusion due to aquifer overpumping, limited groundwater replenishment and unsustainable groundwater management, which in turn also impacts the social and economic development of coastal regions. One example is the Al-Batinah coastal plain in northern Oman, where irrigated agriculture is practiced by many small-scale farms at different distances from the sea, each of them pumping its water from the coastal aquifer. Due to continuous overpumping and progressing saltwater intrusion, farms near the coast have had to close since the water for irrigation became too saline. Investigating appropriate management options requires numerical density-dependent groundwater modelling, which should also portray the adaptation of groundwater abstraction schemes to the water quality. To address this challenge, a moving inner boundary condition is implemented in the numerical density-dependent groundwater model, which adjusts the locations of groundwater abstraction according to the position of the seawater intrusion front, controlled by thresholds of relative chloride concentration. The adaptation process is repeated for each management cycle within transient model simulations and allows for considering feedbacks with the consumers, e.g. agriculture, by moving agricultural farms further inland, or towards the sea if more fertile soils at the coast can be recovered. To find optimal water management strategies efficiently, the behaviour of the numerical groundwater model for different extraction and replenishment scenarios is approximated by an artificial neural network, using a novel approach for state-space surrogate model development. Afterwards, the derived surrogate is coupled with an agriculture module within a simulation-based water management optimisation framework to achieve optimal cropping patterns and water abstraction schemes.
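The moving inner boundary idea, relocating abstraction points once the intrusion front reaches them, can be sketched as a simple threshold rule applied each management cycle. The function name, the chloride threshold, and the shift distance below are illustrative assumptions:

```python
def adapt_wells(wells, chloride, threshold=0.1, shift=250.0):
    """Move each pumping location further inland whenever the relative
    chloride concentration at that well exceeds a threshold.

    `wells` are distances from the coast in metres; `chloride` are the
    corresponding relative concentrations (0..1). Hypothetical rule
    mimicking a 'moving inner boundary condition'; numbers illustrative.
    """
    return [x + shift if c > threshold else x
            for x, c in zip(wells, chloride)]

# Three wells; the two nearest the coast have become too saline.
new = adapt_wells([500.0, 1500.0, 3000.0], [0.25, 0.12, 0.02])
```

In the full model this check would be repeated once per management cycle of the transient simulation, with the reverse move (towards the sea) applied when coastal soils are recovered.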
Numerical simulation of strain-adaptive bone remodelling in the ankle joint
Directory of Open Access Journals (Sweden)
Stukenborg-Colsman Christina
2011-07-01
Background: The use of artificial endoprostheses has become a routine procedure for knee and hip joints, while ankle arthritis has traditionally been treated by means of arthrodesis. Due to its advantages, the implantation of endoprostheses is constantly increasing. While finite element analyses (FEA) of strain-adaptive bone remodelling have been carried out for the hip joint in previous studies, to our knowledge there are no investigations that have considered remodelling processes of the ankle joint. In order to evaluate and optimise new-generation implants of the ankle joint, as well as to gain additional knowledge regarding the biomechanics, strain-adaptive bone remodelling has been calculated separately for the tibia and the talus after providing them with an implant. Methods: FE models of the bone-implant assembly for both the tibia and the talus have been developed. Bone characteristics such as the density distribution have been applied corresponding to CT scans. A force of 5,200 N, which corresponds to the compression force during normal walking of a person with a weight of 100 kg according to Stauffer et al., has been used in the simulation. The bone adaptation law, previously developed by our research team, has been used for the calculation of the remodelling processes. Results: A total bone mass loss of 2% in the tibia and 13% in the talus was calculated. The greater decline of density in the talus is due to its smaller size compared to the relatively large implant dimensions, causing remodelling processes in the whole bone tissue. In the tibia, bone remodelling processes are only calculated in areas adjacent to the implant; thus, a smaller bone mass loss than in the talus can be expected. The simulation results in the distal tibia agree well with the literature. Conclusions: In this study, strain-adaptive bone remodelling processes are simulated using the FE method. The results contribute to a better understanding of the biomechanics of the ankle joint.
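A strain-adaptive remodelling law of the general kind used in such FE studies can be sketched as a per-element density update with a "lazy zone". The rule and all constants below are loosely in the spirit of Huiskes-type laws and are illustrative assumptions, not the authors' specific adaptation law:

```python
def remodel(density, stimulus, s_ref=0.0036, width=0.1, rate=1.0,
            rho_min=0.01, rho_max=1.74, dt=1.0):
    """One explicit update of a strain-adaptive remodelling rule with a
    'lazy zone': density changes only when the mechanical stimulus leaves
    the band s_ref * (1 +/- width). All constants are illustrative.
    """
    lo, hi = s_ref * (1 - width), s_ref * (1 + width)
    if stimulus > hi:
        d = rate * (stimulus - hi)       # overloading: apposition
    elif stimulus < lo:
        d = rate * (stimulus - lo)       # underloading: resorption (d < 0)
    else:
        d = 0.0                          # lazy zone: no net remodelling
    return min(rho_max, max(rho_min, density + d * dt))

# Example: a mildly overloaded element gains a little density.
rho = remodel(0.8, stimulus=0.006)
```

In an FE loop, the stimulus (e.g. strain energy density per unit mass) is recomputed after each update, so stress shielding next to a stiff implant drives the local resorption reported above.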
International Development Research Centre (IDRC) Digital Library (Canada)
Nairobi, Kenya. 28. Adapting Fishing Policy to Climate Change with the Aid of Scientific and Endogenous Knowledge. Cape Verde, Gambia, Guinea, Guinea-Bissau, Mauritania and Senegal. Environment and Development in the Third World (ENDA-TM), Dakar, Senegal. 29. Integrating Indigenous Knowledge in Climate Risk ...
International Nuclear Information System (INIS)
Li, Taoran; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q.
2015-01-01
Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery
Radiation annealing in cuprous oxide
DEFF Research Database (Denmark)
Vajda, P.
1966-01-01
Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square-root relationship.
Influence of alloying and secondary annealing on anneal hardening ...
Indian Academy of Sciences (India)
Unknown
Influence of alloying and secondary annealing on anneal hardening effect at sintered copper alloys. Svetlana Nestorovic, Technical Faculty Bor, University of Belgrade, Bor, Yugoslavia. MS received 11 February 2004; revised 29 October 2004. Abstract: This paper reports results of investigation carried out on sintered ...
DOE's annealing prototype demonstration projects
International Nuclear Information System (INIS)
Warren, J.; Nakos, J.; Rochau, G.
1997-01-01
One of the challenges U.S. utilities face in addressing technical issues associated with the aging of nuclear power plants is the long-term effect of plant operation on reactor pressure vessels (RPVs). As a nuclear plant operates, its RPV is exposed to neutrons. For certain plants, this neutron exposure can cause embrittlement of some of the RPV welds which can shorten the useful life of the RPV. This RPV embrittlement issue has the potential to affect the continued operation of a number of operating U.S. pressurized water reactor (PWR) plants. However, RPV material properties affected by long-term irradiation are recoverable through a thermal annealing treatment of the RPV. Although a dozen Russian-designed RPVs and several U.S. military vessels have been successfully annealed, U.S. utilities have stated that a successful annealing demonstration of a U.S. RPV is a prerequisite for annealing a licensed U.S. nuclear power plant. In May 1995, the Department of Energy's Sandia National Laboratories awarded two cost-shared contracts to evaluate the feasibility of annealing U.S. licensed plants by conducting an anneal of an installed RPV using two different heating technologies. The contracts were awarded to the American Society of Mechanical Engineers (ASME) Center for Research and Technology Development (CRTD) and MPR Associates (MPR). The ASME team completed its annealing prototype demonstration in July 1996, using an indirect gas furnace at the uncompleted Public Service of Indiana's Marble Hill nuclear power plant. The MPR team's annealing prototype demonstration was scheduled to be completed in early 1997, using a direct heat electrical furnace at the uncompleted Consumers Power Company's nuclear power plant at Midland, Michigan. This paper describes the Department's annealing prototype demonstration goals and objectives; the tasks, deliverables, and results to date for each annealing prototype demonstration; and the remaining annealing technology challenges
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2014-01-01
This paper presents one-of-a-kind MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting of the telescope. A temporally fourth-order Runge-Kutta and spatially fifth-order WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32,000 cores and 4 billion cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregularities caused by the highly complex geometry. Limits to scaling beyond 32K cores are identified, and targeted code optimizations are discussed.
Topical problems of crackability in weld annealing of low-alloyed pressure vessel steels
International Nuclear Information System (INIS)
Holy, M.
1977-01-01
The following method was developed for determining annealing crackability: A sharp notch was made in the middle of the bodies of rods imitated in a welding simulator. Chucking heads were modified so as to permit chucking a rod in an austenitic block by securing the nut. Prestress was controlled by button-headed screw adapters. The blocks were made of 4 types of austenitic steels with graded thermal expansivity coefficients, all higher than that of the tested low-alloyed steel rod. The blocks with rods were placed in a furnace and heated at a rate of 100 degC/h. As a result of the greater thermal expansion of the austenitic block, the rod began to be stretched, and at some temperature above 500 degC it was pulled apart. The risk of annealing crackability of welded joints may be reduced by the choice of material and melt and by the welding technology, mainly by choosing a suitable filler material in whose weld metal the plastic deformation during annealing preferentially takes place. (J.P.)
Radiation annealing in cuprous oxide
DEFF Research Database (Denmark)
Vajda, P.
1966-01-01
Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square...
Dynamical Frustration in ANNNI Model and Annealing
Sen, Parongama; Das, Pratap K.
Simulated annealing is usually applied to systems with frustration, like spin glasses and optimisation problems, where the energy landscape is complex with many spurious minima. There are certain other systems, however, which have a very simple energy landscape and ground states, but the system still fails to reach its ground state during an energy-lowering dynamical process. This situation corresponds to "dynamical frustration". We have specifically considered the case of the axial next-nearest-neighbour Ising (ANNNI) chain, where such a situation is encountered. In Sect. II, we elaborate the notion of dynamical frustration with examples, and in Sect. III, the dynamics in the ANNNI model is discussed in detail. The results of applying classical and quantum annealing are discussed in Sects. IV and V. A summary and some concluding comments are given in the last section.
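The classical annealing dynamics referred to in this abstract can be illustrated with a minimal sketch (a hypothetical one-dimensional toy landscape, not the ANNNI chain): a Metropolis walker with a geometric cooling schedule escapes a spurious minimum and settles in the deeper well.

```python
import math
import random

def simulated_anneal(energy, state, neighbor, t0=2.0, cooling=0.95,
                     steps_per_t=100, t_min=1e-3, seed=0):
    """Classical simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = neighbor(state, rng)
            de = energy(cand) - e
            # Metropolis rule: always accept downhill, uphill with prob exp(-dE/T)
            if de <= 0 or rng.random() < math.exp(-de / t):
                state, e = cand, e + de
                if e < best_e:
                    best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy double-well landscape: spurious minimum near x = +1, global one near x = -1
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_opt, e_opt = simulated_anneal(f, 3.0, step)   # start in the wrong basin
```

At high temperature the walker crosses the barrier freely; as the temperature decays geometrically it freezes into the deeper basin near x = -1.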
SU-F-J-110: MRI-Guided Single-Session Simulation, Online Adaptation, and Treatment
International Nuclear Information System (INIS)
Hill, P; Geurts, M; Mittauer, K; Bayouth, J
2016-01-01
Purpose: To develop a combined simulation and treatment workflow for MRI-guided radiation therapy using the ViewRay treatment planning and delivery system. Methods: Several features of the ViewRay MRIdian planning and treatment workflows are used to simulate and treat patients that require emergent radiotherapy. A simple “pre-plan” is created on diagnostic imaging retrieved from radiology PACS, where conformal fields are created to target a volume defined by a physician based on review of the diagnostic images and chart notes. After initial consult in radiation oncology, the patient is brought to the treatment room, immobilized, and imaged in treatment position with a volumetric MR. While the patient rests on the table, the pre-plan is applied to the treatment planning MR and dose is calculated in the treatment geometry. After physician review, modification of the plan may include updating the target definition, redefining fields, or re-balancing beam weights. Once an acceptable treatment plan is finalized and approved, the patient is treated. Results: Careful preparation and judicious choices in the online planning process allow conformal treatment plans to be created and delivered in a single, thirty-minute session. Several advantages have been identified using this process as compared to conventional urgent CT simulation and delivery. Efficiency gains are notable, as physicians appreciate the predictable time commitment and patient waiting time for treatment is decreased. MR guidance in a treatment position offers both enhanced contrast for target delineation and reduction of setup uncertainties. The MRIdian system tools designed for adaptive radiotherapy are particularly useful, enabling plan changes to be made in minutes. Finally, the resulting plans, typically 6 conformal beams, are delivered as quickly as more conventional AP/PA beam arrangements with comparatively superior dose distributions. Conclusion: The ViewRay treatment planning software and
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
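The refinement criterion described in the abstract above, observing a short stochastic trajectory and flagging the states that carry non-negligible probability mass, can be sketched for a toy birth-death process (hypothetical rates; the helper below is illustrative, not the paper's code):

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death * x);
    records the residence time spent in each discrete state."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    visits = {}                      # state -> accumulated residence time
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death
        dt = rng.expovariate(a_total)
        visits[x] = visits.get(x, 0.0) + dt
        t += dt
        x = x + 1 if rng.random() * a_total < a_birth else x - 1
    return visits

visits = gillespie_birth_death(k_birth=20.0, k_death=1.0, x0=0, t_end=200.0)
total = sum(visits.values())
# States carrying non-negligible probability mass get a refined mesh
refine = sorted(s for s, w in visits.items() if w / total > 1e-3)
```

For these rates the stationary distribution is Poisson with mean 20, so the flagged region brackets that mode; a finite element mesh would be refined there and coarsened elsewhere.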
Shetty, Rohit; Kochar, Shruti; Grover, Tushar; Khamar, Pooja; Kusumgar, Pallak; Sainani, Kanchan; Sinha Roy, Abhijit
2017-11-01
To evaluate the repeatability of aberration measurement obtained by a Hartmann-Shack aberrometer combined with a visual adaptive optics simulator in normal and keratoconic eyes. One hundred fifteen normal eyes and 92 eyes with grade I and II keratoconus, as per the Amsler-Krumeich classification, were included in the study. To evaluate the repeatability, three consecutive measurements of ocular aberrations were obtained by a single operator. Zernike analyses up to the 5th order for a pupil size of 4.5 mm were performed. Statistical analyses included the intraclass correlation coefficient (ICC) and within-subject standard deviation (SD). For intrasession repeatability, the ICC value for sphere and cylinder was 0.94 and 0.93 in normal eyes and 0.98 and 0.97 in keratoconic eyes, respectively. The ICC for root mean square of higher order aberrations (HOA RMS) was 0.82 in normal and 0.98 in keratoconic eyes. For 3rd order aberrations (trefoil and coma), the ICC values were greater than 0.87 for normal eyes and greater than 0.92 for keratoconic eyes. The ICC for spherical aberration was 0.92 and 0.90 in normal and keratoconic eyes, respectively. Visual adaptive optics provided repeatable aberrometry data in both normal and keratoconic eyes. For most of the parameters, the repeatability in eyes with early keratoconus was somewhat better than that for normal eyes. The repeatability of the Zernike terms was acceptable for 3rd order (trefoil and coma) and spherical aberrations. Therefore, visual adaptive optics was a suitable tool to perform repeatable aberrometric measurements. [J Refract Surg. 2017;33(11):769-772.]. Copyright 2017, SLACK Incorporated.
Initial reconstruction results from a simulated adaptive small animal C shaped PET/MR insert
Energy Technology Data Exchange (ETDEWEB)
Efthimiou, Nikos [Technological Educational Institute of Athens (Greece); Kostou, Theodora; Papadimitroulas, Panagiotis [Technological Educational Institute of Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras (Greece); Charalampos, Tsoumpas [Division of Biomedical Imaging, University of Leeds, Leeds (United Kingdom); Loudos, George [Technological Educational Institute of Athens (Greece)
2015-05-18
Traditionally, most clinical and preclinical PET scanners rely on full cylindrical geometry for whole-body as well as dedicated organ scans, which is not optimized with regard to sensitivity and resolution. Several groups have proposed the construction of dedicated PET inserts for MR scanners, rather than the construction of new integrated PET/MR scanners. The space inside an MR scanner is a limiting factor, which can be reduced further by the use of extra coils and can render the use of non-flexible cylindrical PET scanners difficult if not impossible. The incorporation of small SiPM arrays can provide the means to design adaptive PET scanners that fit in tight locations, which makes imaging possible and improves the sensitivity, due to the closer proximity to the organ of interest. In order to assess the performance of such a device we simulated the geometry of a C-shaped PET scanner using GATE. The design of the C-PET was based on a realistic SiPM-BGO scenario. To reconstruct the simulated data with STIR, we had to calculate the system probability matrix corresponding to this non-standard geometry. For this purpose we developed an efficient multi-threaded ray-tracing technique to calculate the line-integral paths in voxel arrays. One of its major features is the ability to automatically adjust the size of the FOV according to the geometry of the detectors. The initial results showed that the sensitivity improved as the angle between the detector arrays increased, giving better angular sampling of the scanner's field of view (FOV). The more complete angular coverage also helped to improve the shape of the source in the reconstructed images. Furthermore, by adapting the FOV closer to the size of the source, the sensitivity per voxel is improved.
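The ray tracer mentioned above computes line integrals through a voxel array for the system matrix. A single-threaded 2-D sketch of the underlying Siddon-style traversal (a hypothetical helper, not the authors' implementation) is:

```python
import math

def ray_voxel_lengths(p0, p1, nx, ny, sx, sy):
    """Intersection length of segment p0 -> p1 with each voxel of a 2-D grid.

    The grid spans [0, nx*sx] x [0, ny*sy]; returns {(ix, iy): length}.
    Siddon-style: merge the sorted parametric crossings of x- and y-planes,
    then attribute each sub-segment to the voxel containing its midpoint.
    """
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    alphas = {0.0, 1.0}
    if dx:
        alphas |= {a for i in range(nx + 1) for a in [(i * sx - x0) / dx] if 0 < a < 1}
    if dy:
        alphas |= {a for j in range(ny + 1) for a in [(j * sy - y0) / dy] if 0 < a < 1}
    out = {}
    a_sorted = sorted(alphas)
    for a_lo, a_hi in zip(a_sorted, a_sorted[1:]):
        am = 0.5 * (a_lo + a_hi)              # midpoint identifies the voxel
        ix = int((x0 + am * dx) / sx)
        iy = int((y0 + am * dy) / sy)
        if 0 <= ix < nx and 0 <= iy < ny:
            out[(ix, iy)] = out.get((ix, iy), 0.0) + (a_hi - a_lo) * length
    return out

# Horizontal ray through the middle of a 4x4 grid of unit voxels
L = ray_voxel_lengths((0.0, 2.5), (4.0, 2.5), 4, 4, 1.0, 1.0)
```

In a system-matrix calculation, each detector-pair line of response is traced this way and the per-voxel lengths become the matrix entries; multi-threading simply distributes rays over threads, since each ray is independent.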
Overall simulation of a HTGR plant with the gas adapted MANTA code
International Nuclear Information System (INIS)
Emmanuel Jouet; Dominique Petit; Robert Martin
2005-01-01
Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect-cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an intermediate heat exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has been the subject of new developments to simulate all the forced-convection transients of a nuclear plant with a gas-cooled high temperature reactor, including specific core thermal-hydraulic and neutronic models, gas and water-steam turbomachinery, and the control structure. The gas-adapted MANTA code version is now able to model a complete HTGR plant with a direct Brayton cycle as well as indirect cycles. To validate these new developments, a MANTA model of a real plant with a direct Brayton cycle has been built, and steady states and transients have been compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code has been performed for transient calculations of the AREVA indirect-cycle HTR project plant. Moreover, to improve user-friendliness, so that MANTA can serve as a system conception and design-optimization tool as well as a plant simulation tool, a man-machine interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)
Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment
Directory of Open Access Journals (Sweden)
Yao Yao
2016-12-01
We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.
An adaptive grid refinement strategy for the simulation of negative streamers
International Nuclear Information System (INIS)
Montijn, C.; Hundsdorfer, W.; Ebert, U.
2006-01-01
The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods in which the refinement criterion is based on local error monitors fail due to the pulled character of the streamer front that propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented, they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers
Adaptive sliding mode control on inner axis for high precision flight motion simulator
Fu, Yongling; Niu, Jianjun; Wang, Yan
2008-10-01
Discrete adaptive sliding mode control (ASMC) with an exponential reaching law is proposed to alleviate the influence of factors such as the periodic fluctuation torque of the motor, nonlinear friction, and other disturbances that deteriorate the tracking performance of the DC-torque-motor-driven inner axis of a high precision flight motion simulator. Considering the limited ability of the ASMC to compensate for these uncertainties, an equivalent friction advance compensator based on the Stribeck model is also presented for extra-low-speed servo operation of the system. First, a procedure that uses the available parts of the inner axis itself to identify the Stribeck model parameters is described. Second, an adaptive approach is used to overcome the difficulty of choosing the key parameter of the exponential reaching law, and the stability of the algorithm is analyzed. Finally, comparative experiments are carried out to verify the validity of the combined approach. The experimental results show a stable 0.00006°/s speed response, with the tracking error within 0.0002° for 95% of the time; other servo tasks, such as sine-wave tracking, are also performed with high precision.
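The exponential reaching law and online gain adaptation referred to above can be sketched on a toy double-integrator servo with a Coulomb-friction-like disturbance (all parameters, the plant, and the adaptation rule are illustrative assumptions, not the paper's motor model):

```python
import math

def simulate_asmc(steps=2000, T=0.001):
    """Discrete sliding-mode servo on a double integrator, x'' = u - d.

    Sliding surface s = c*e + e_dot; the control enforces the exponential
    reaching law s_dot = -q*s - eps*sign(s), with the switching gain eps
    adapted online from |s| (an illustrative adaptation rule)."""
    c, q = 50.0, 20.0            # surface slope, reaching-law decay
    eps, gamma = 0.1, 200.0      # initial switching gain, adaptation rate
    x, v = 0.0, 0.0              # plant position and velocity
    r = 1.0                      # step reference
    errs = []
    for _ in range(steps):
        e = r - x
        e_dot = -v               # reference is constant
        s = c * e + e_dot
        eps += gamma * abs(s) * T            # grow the gain while off the surface
        sgn = (s > 0) - (s < 0)
        u = c * e_dot + q * s + eps * sgn    # from s_dot = -q*s - eps*sign(s)
        d = 0.5 * math.copysign(1.0, v) if v else 0.0   # friction opposes motion
        v += (u - d) * T
        x += v * T
        errs.append(abs(e))
    return errs

errs = simulate_asmc()
```

The adaptive gain grows until it dominates the bounded friction disturbance, which guarantees reaching; once on the surface, the error decays at the rate set by the surface slope c.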
Elrad, Oren
2009-03-01
During the replication of many viruses, hundreds to thousands of protein subunits assemble around the viral nucleic acid to form a protein shell called a capsid. Most viruses form one particular structure with astonishing fidelity; yet, recent experiments demonstrate that capsids can assemble with different sizes and morphologies to accommodate nucleic acids or other cargoes such as functionalized nanoparticles. In this talk, we will explore the mechanisms of simultaneous assembly and cargo encapsidation with a computational model that describes the assembly of icosahedral capsids around functionalized nanoparticles. With this model, we find parameter values for which subunits faithfully form empty capsids with a single morphology, but adaptively assemble into different icosahedral morphologies around nanoparticles with different diameters. Analyzing trajectories in which adaptation is or is not successful sheds light on the mechanisms by which capsid morphology may be controlled in vitro and in vivo, and suggests experiments to test these mechanisms. We compare the simulation results to recent experiments in which Brome Mosaic Virus capsid proteins assemble around functionalized nanoparticles, and describe how future experiments can test the model predictions.
Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm
Mitra, Sunanda; Pemmaraju, Surya
1992-01-01
Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length-rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.
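The leader-clustering idea behind AFLC, where the first pattern seeds a cluster and later patterns either join the nearest centroid within a vigilance radius or seed a new cluster, can be sketched in one dimension (a simplified crisp version; the actual AFLC uses fuzzy membership updates):

```python
def leader_cluster(points, tau=1.0):
    """On-line leader clustering: the first pattern seeds a cluster; each
    later pattern joins the nearest centroid if it lies within the
    vigilance radius tau, otherwise it seeds a new cluster. Centroids are
    updated as running means, so clusters adapt while data stream in."""
    centroids, counts, labels = [], [], []
    for p in points:
        if centroids:
            d, j = min((abs(p - cen), j) for j, cen in enumerate(centroids))
        else:
            d, j = float("inf"), -1
        if d <= tau:
            counts[j] += 1
            centroids[j] += (p - centroids[j]) / counts[j]   # running mean
            labels.append(j)
        else:
            centroids.append(float(p))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return centroids, labels

# Two well-separated groups of 1-D "system states"
data = [0.1, 0.2, 5.0, 5.1, 0.15, 4.9]
cents, labs = leader_cluster(data, tau=1.0)
```

In a control setting, each discovered cluster of system states would be mapped to one control action (here, a delta-voltage range), with no prior knowledge of how many clusters exist.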
The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture
Azdoud, Yan
2014-04-19
We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain. © 2014 Springer-Verlag Berlin Heidelberg.
Shi, Zhenzhen; Wu, Chih-Hang J.; Ben-Arieh, David; Simpson, Steven Q.
2015-01-01
Sepsis is a systemic inflammatory response (SIR) to infection. In this work, a system dynamics mathematical model (SDMM) is examined to describe the basic components of SIR and sepsis progression. Both innate and adaptive immunities are included, and simulated results in silico have shown that adaptive immunity has significant impacts on the outcomes of sepsis progression. Further investigation has found that the intervention timing, intensity of anti-inflammatory cytokines, and initial patho...
Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.
2018-03-01
This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
Energy Technology Data Exchange (ETDEWEB)
Sanchez Camacho, Enrique; Andreu Alvarez, Joaquin [Universidad Politecnica de Valencia (Spain)
2001-06-01
Two numerical procedures, based on the Genetic Algorithm (GA) and Simulated Annealing (SA), are developed to solve the problem of capacity expansion of a water resource system. The problem was divided into two subproblems: capital availability and operation policy. Both are optimisation-simulation models; the first is solved by means of the GA and the SA, respectively, while the second is solved using the out-of-kilter algorithm (OKA) in both models. The objective function considers the usual benefits and costs in this kind of system, such as irrigation and hydropower benefits, and the costs of dam construction and system maintenance. The strengths and weaknesses of both models are evaluated by comparing their results with those obtained with the branch-and-bound technique, which was classically used to solve this kind of problem.
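The GA half of such an optimisation-simulation scheme can be sketched on a hypothetical capacity-expansion knapsack with a capital budget (illustrative benefits and costs, not the paper's system; the inner operation-policy simulation is collapsed into a simple fitness function):

```python
import random

def genetic_search(fitness, n_bits, pop_size=40, gens=60, p_mut=0.02, seed=2):
    """Minimal generational GA: binary tournament selection, one-point
    crossover, bit-flip mutation; the best individual ever seen is kept."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)   # binary tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Bit i decides whether expansion project i is built (benefit b[i], cost c[i]);
# total cost above the capital budget of 10 is heavily penalised
b = [6, 5, 8, 9, 6, 7, 3]
c = [2, 3, 6, 7, 5, 9, 4]
def fitness(x):
    cost = sum(ci for ci, xi in zip(c, x) if xi)
    return sum(bi for bi, xi in zip(b, x) if xi) - 100 * max(0, cost - 10)

best = genetic_search(fitness, n_bits=7)
```

In the paper's setting, evaluating `fitness` would itself run the operation-policy subproblem (the out-of-kilter network flow solve) for the candidate expansion plan.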
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near-infrared spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation-pattern-change simulation, reaching 99% accuracy. Such robustness is relevant to brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
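The core idea, classifying new points from the current mixture model while nudging the model parameters so the decision boundary tracks a drifting activation pattern, can be sketched with a much-simplified two-class Gaussian classifier (fixed variances and gradient-style updates rather than the paper's variational Bayesian inference):

```python
import math
import random

class AdaptiveGaussianClassifier:
    """Two-class 1-D Gaussian classifier that tracks a drifting class mean.

    Each new point is soft-labelled by its class responsibility, then each
    class mean moves toward the point in proportion to that responsibility,
    so earlier estimates act as a prior that is gradually revised.
    Variances and class priors are held fixed for brevity."""

    def __init__(self, mu0, mu1, sigma=1.0, lr=0.05):
        self.mu = [mu0, mu1]
        self.sigma, self.lr = sigma, lr

    def _lik(self, x, k):
        z = (x - self.mu[k]) / self.sigma
        return math.exp(-0.5 * z * z)

    def classify(self, x):
        p0, p1 = self._lik(x, 0), self._lik(x, 1)
        r1 = p1 / (p0 + p1)                  # responsibility of class 1
        # unsupervised update: nudge each mean toward x by its responsibility
        self.mu[0] += self.lr * (1 - r1) * (x - self.mu[0])
        self.mu[1] += self.lr * r1 * (x - self.mu[1])
        return int(r1 > 0.5)

clf = AdaptiveGaussianClassifier(mu0=0.0, mu1=4.0)
rng = random.Random(3)
for t in range(400):                         # class-1 pattern drifts from 4 to 7
    clf.classify(4.0 + 3.0 * t / 400 + rng.gauss(0, 0.5))
```

No ground-truth labels are ever supplied: the classifier's own soft labels drive the updates, so the class-1 mean follows the drifting pattern while the class-0 mean stays put.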
Daftari, I.; Phillips, T. L.
2003-06-01
A patient assembly adapter system for ocular melanoma patient simulation was developed and its performance evaluated. The aim in constructing the apparatus was to simulate the patients in the supine position using a commercial x-ray simulator. The apparatus consists of a base plate, a head immobilization holder, and a patient assembly system that includes a fixation light and a collimator system. The reproducibility of repeated fixation was initially tested with a head phantom. Simulation and verification films were studied for seven consecutive patients treated with proton beam therapy. Patient simulation was performed in the supine position using a dental fixation bite block and a thermoplastic head mask immobilization device with the patient adapter system. Two orthogonal x rays were used to obtain the x, y, and z coordinates of sutured tantalum rings for treatment planning with the EYEPLAN software. The verification films were obtained in the treatment position with the fixation light along the central axis of the eye. The results indicate good agreement, with deviations within 0.5 mm. This investigation showed that the same planning accuracy can be achieved by performing simulation with the patient supine, using the adapter described above, as by performing simulation with the patient in the seated treatment position. The adapter can also be attached to the head of the chair for simulating in the seated position using a fixed x-ray unit. This has three advantages: (1) it saves radiation therapists time; (2) it eliminates the need for arranging access to the treatment room, thus avoiding potential conflicts in treatment room usage; and (3) it allows the use of a commercial simulator.
Biomolecular structure refinement based on adaptive restraints using local-elevation simulation
International Nuclear Information System (INIS)
Christen, Markus; Keller, Bettina; Gunsteren, Wilfred F. van
2007-01-01
Introducing experimental values as restraints into molecular dynamics (MD) simulation to bias the values of particular molecular properties, such as nuclear Overhauser effect intensities or distances, dipolar couplings, 3J-coupling constants, chemical shifts or crystallographic structure factors, towards experimental values is a widely used structure refinement method. Because multiple torsion angle values φ correspond to the same 3J-coupling constant and high energy barriers separate them, restraining 3J-coupling constants remains difficult. A method to adaptively enforce restraints using a local elevation (LE) potential energy function is presented and applied to 3J-coupling constant restraining in an MD simulation of hen egg-white lysozyme (HEWL). The method successfully enhances sampling of the restrained torsion angles until the 37 experimental 3J-coupling constant values are reached, thereby also improving the agreement with the 1,630 experimental NOE atom-atom distance upper bounds. Afterwards, the torsion angles φ are kept restrained by the built-up local-elevation potential energies.
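The difficulty described above — several torsion angles mapping to the same 3J value — follows from the cosine form of the Karplus relation, J(φ) = A cos²φ + B cos φ + C. A sketch with one commonly quoted HN-Hα coefficient set (A = 6.4, B = -1.4, C = 1.9 Hz); these coefficients are illustrative, not necessarily the parameterization used in the cited work:

```python
import math

def karplus_j(phi_deg, A=6.4, B=-1.4, C=1.9):
    """3J-coupling constant (Hz) from a torsion angle (degrees) via the
    Karplus relation J(phi) = A*cos^2(phi) + B*cos(phi) + C."""
    c = math.cos(math.radians(phi_deg))
    return A * c * c + B * c + C
```

Because cos is even, `karplus_j(60.0)` and `karplus_j(-60.0)` return the same J, which is exactly the degeneracy that makes restraining a measured 3J value ambiguous.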
Smith, A.; Bates, P. D.; Freer, J. E.
2012-12-01
Modelled assessments of climate change impacts on flooding are now increasingly used to inform adaptation and mitigation policy. These modelled assessments are typically driven by Global and Regional Climate Models (GCM/RCM). However, opinion is divided on how best to proceed, particularly with regards to the feasibility and practicality of using climate model outputs to inform management strategies. Here RCM driven projections of extreme discharges are compared against the uncertainty present in the observed record. The run-off model HBV_light is applied, within the Generalised Likelihood Uncertainty Estimation (GLUE) framework, to the Upper Avon catchment in the Midlands of England, in the U.K. A 48 year observational record of rainfall and discharge was used, with non-behavioural parameter sets being rejected through an evaluation of continuous hydrograph simulation and annual maximum discharge. The output of an RCM ensemble was used, with differing ensemble approaches, to assess climate change impacts on extreme discharge. A daily stochastic rainfall generator was then applied to the observational record and used to simulate 2000 years of discharge. RCM driven changes in extreme discharge could then be compared against the variability present in the observed record. The results suggest that coping with present uncertainty in the observed record is already a significant challenge, with the range of uncertainty in a 1 in 100 year event eclipsing the uncertainty present in climate projections.
Fast simulation of transport and adaptive permeability estimation in porous media
Energy Technology Data Exchange (ETDEWEB)
Berre, Inga
2005-07-01
The focus of the thesis is twofold: Both fast simulation of transport in porous media and adaptive estimation of permeability are considered. A short introduction that motivates the work on these topics is given in Chapter 1. In Chapter 2, the governing equations for one- and two-phase flow in porous media are presented. Overall numerical solution strategies for the two-phase flow model are also discussed briefly. The concepts of streamlines and time-of-flight are introduced in Chapter 3. Methods for computing streamlines and time-of-flight are also presented in this chapter. Subsequently, in Chapters 4 and 5, the focus is on simulation of transport in a time-of-flight perspective. In Chapter 4, transport of fluids along streamlines is considered. Chapter 5 introduces a different viewpoint based on the evolution of isocontours of the fluid saturation. While the first chapters focus on the forward problem, which consists in solving a mathematical model given the reservoir parameters, Chapters 6, 7 and 8 are devoted to the inverse problem of permeability estimation. An introduction to the problem of identifying spatial variability in reservoir permeability by inversion of dynamic production data is given in Chapter 6. In Chapter 7, adaptive multiscale strategies for permeability estimation are discussed. Subsequently, Chapter 8 presents a level-set approach for improving piecewise constant permeability representations. Finally, Chapter 9 summarizes the results obtained in the thesis; in addition, the chapter gives some recommendations and suggests directions for future work. Part II In Part II, the following papers are included in the order they were completed: Paper A: A Streamline Front Tracking Method for Two- and Three-Phase Flow Including Capillary Forces. I. Berre, H. K. Dahle, K. H. Karlsen, and H. F. Nordhaug. In Fluid flow and transport in porous media: mathematical and numerical treatment (South Hadley, MA, 2001), volume 295 of Contemp. Math., pages 49
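The time-of-flight coordinate central to Chapters 3-5 is the travel time of a fluid particle along a streamline, τ = ∫ φ/|v| ds (porosity over speed). A minimal discrete sketch over a polyline streamline; constant porosity and straight segments are simplifying assumptions of this illustration, not the thesis's method:

```python
import math

def time_of_flight(points, speeds, porosity=0.2):
    """Discrete time-of-flight along a streamline given as a polyline:
    tau = sum over segments of porosity * ds / v, with the segment speed
    taken as the mean of the endpoint speeds."""
    tau = 0.0
    for (x0, y0), (x1, y1), v0, v1 in zip(points, points[1:], speeds, speeds[1:]):
        ds = math.hypot(x1 - x0, y1 - y0)
        tau += porosity * ds / (0.5 * (v0 + v1))
    return tau
```

Isocontours of τ then describe how far injected fluid has advanced, which is the viewpoint Chapter 5 builds on.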
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU and space adaptivity; multicore, GPU, space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
De Colle, Fabio; Ramirez-Ruiz, Enrico [Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064 (United States); Granot, Jonathan [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Lopez-Camara, Diego, E-mail: fabio@ucolick.org [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 D.F. (Mexico)
2012-02-20
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
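The Sedov length referred to in the abstract is the radius at which the rest-mass energy of the swept-up ambient medium equals the blast energy E: for ρ = A r^(-k), the enclosed mass is 4πA r^(3-k)/(3-k), giving l = [(3-k) E / (4πA c²)]^(1/(3-k)). A sketch of this standard definition (symbols and default cgs units are assumptions of this illustration, not taken from the paper):

```python
import math

def sedov_length(E, A, k, c=2.998e10):
    """Radius at which the swept-up rest-mass energy equals the blast
    energy E, for an ambient density profile rho = A * r**(-k):
    l = [(3-k) E / (4 pi A c^2)]**(1/(3-k)), valid for k < 3."""
    return ((3.0 - k) * E / (4.0 * math.pi * A * c * c)) ** (1.0 / (3.0 - k))
```

The paper's 1D result is that the transition to nonrelativistic expansion happens at radii significantly larger than this scale, increasingly so for larger k.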
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of the both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhoods, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
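The ROC comparison above reduces to the area under the curve, which equals the probability that a randomly chosen urbanized cell receives a higher model score than a randomly chosen non-urbanized one (the Mann-Whitney identity). A small illustrative implementation of that identity (not the software used in the study; O(n²), fine for small samples):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of (positive, negative) pairs where the positive scores higher,
    counting ties as half a win. labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale the study's figures (94.77% for MARS, 95.36% for ANN) mean the ANN ranks urbanized cells above non-urbanized ones slightly more reliably.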
Olynick, David P.; Hassan, H. A.; Moss, James N.
1988-01-01
A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
International Nuclear Information System (INIS)
Rank, Christopher M; Tremmel, Christoph; Hünemohr, Nora; Nagel, Armin M; Jäkel, Oliver; Greilich, Steffen
2013-01-01
In order to benefit from the highly conformal irradiation of tumors in ion radiotherapy, sophisticated treatment planning and simulation are required. The purpose of this study was to investigate the potential of MRI for ion radiotherapy treatment plan simulation and adaptation using a classification-based approach. Firstly, a voxelwise tissue classification was applied to derive pseudo CT numbers from MR images using up to 8 contrasts. Appropriate MR sequences and parameters were evaluated in cross-validation studies of three phantoms. Secondly, ion radiotherapy treatment plans were optimized using both MRI-based pseudo CT and reference CT and recalculated on reference CT. Finally, a target shift was simulated and a treatment plan adapted to the shift was optimized on a pseudo CT and compared to reference CT optimizations without plan adaptation. The derivation of pseudo CT values led to mean absolute errors in the range of 81-95 HU. The most significant deviations appeared at borders between air and different tissue classes and originated from partial volume effects. Simulations of ion radiotherapy treatment plans using pseudo CT for optimization revealed only small underdosages in distal regions of a target volume, with deviations of the mean PTV dose between 1.4% and 3.1% compared to reference CT optimizations. A plan adapted to the target volume shift and optimized on the pseudo CT exhibited target dose coverage comparable to that of a non-adapted plan optimized on a reference CT. We were able to show that an MRI-based derivation of pseudo CT values using a purely statistical classification approach is feasible although no physical relationship exists. Large errors appeared at compact bone classes and came from an imperfect distinction of bones and other tissue types in MRI. In simulations of treatment plans, it was demonstrated that these deviations are comparable to uncertainties of a target volume shift of 2 mm in two directions indicating that especially
Hu, Jiwen; Ding, Yajun; Qian, Shengyou; Tang, Xiangde
2013-01-01
The control problem in ultrasound therapy is to destroy the tumor tissue, while not harming the intervening healthy tissue, with a desired temperature elevation. The objective of this research is to present a robust and feasible method to control the temperature distribution and the temperature elevation in the treatment region within a prescribed time, which can improve the curative effect and decrease the treatment time for heating large tumors (≥2.0 cm in diameter). An adaptive self-tuning-regulator (STR) controller has been introduced into this control method by adding a time factor with a recursive algorithm, and the speed of sound and absorption coefficient of the medium are considered as functions of temperature during heating. The presented control method is tested for a self-focused concave spherical transducer (0.5 MHz, 9 cm aperture, 8.0 cm focal length) through numerical simulations with three control temperatures of 43°C, 50°C and 55°C. The results suggest that this control system has adaptive ability for variable parameters and has a rapid response to the temperature and acoustic power output in the prescribed time for the hyperthermia interest. There is no overshoot during temperature elevation and no oscillation after reaching the desired temperatures. It is found that the same results can be obtained for different frequencies and temperature elevations. This method can obtain an ellipsoid-shaped ablation region, which is meaningful for the treatment of large tumors. Copyright © 2012 Elsevier B.V. All rights reserved.
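The no-overshoot behaviour claimed above can be illustrated with a much simpler closed loop than the paper's recursive self-tuning regulator: a first-order lumped tissue model driven by a PI controller whose gains cancel the slow pole, so the temperature approaches the set point monotonically. All model parameters and gains here are illustrative assumptions, not values from the paper:

```python
def simulate_heating(t_set=50.0, t_body=37.0, a=0.5, b=1.0,
                     kp=2.0, ki=1.0, dt=0.01, steps=2000):
    """Euler simulation of dT/dt = -a*(T - t_body) + b*u with a PI law
    u = kp*err + integral(ki*err). With ki/kp = a, the controller zero
    cancels the plant pole, giving a monotone, overshoot-free rise.
    Returns the temperature trajectory."""
    T, integ, traj = t_body, 0.0, []
    for _ in range(steps):
        err = t_set - T
        u = kp * err + integ
        integ += ki * err * dt
        T += dt * (-a * (T - t_body) + b * u)
        traj.append(T)
    return traj
```

The integral term removes steady-state error, so the final temperature settles at the set point rather than just near it.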
Open Source Tools for Adaptive Simulation of Fluid-Structure Interaction Processes
Kees, C. E.; Quezada de Luna, M.; Zhang, A.; Rakhsha, M.; de Lataillade, T.; Dimakopoulos, A.
2017-12-01
Surface and shallow subsurface structures often play critical roles in controlling hydrological processes as well as in determining the performance of large-scale civil works projects, such as flood and coastal storm protection systems. Reliably predicting performance of such structures requires coupling to larger scale models and field data for the hydraulic forcing while sometimes resolving down to scales ranging from meters to millimeters. These scales are dictated by accuracy considerations specific to the analysis and processes in question. The hydraulics are often inherently three-dimensional and involve complex free-surface dynamics coupled to dynamic structural response. In this presentation we will present recent work on combining unstructured finite element methods with dynamically adaptive meshing tools to achieve simulation of structural response under surface and subsurface hydraulic forcing. The approach combines several techniques, including dynamically redistributing boundary-fitted meshes (Arbitrary Lagrangian Eulerian methods), employing immersed boundary approximations, and locally adapting computational meshes to achieve robust and high-fidelity fluid-solid interaction dynamics at a reasonable computational cost. A key technology underpinning the approach is a stabilized finite element method for incompressible free-surface flows, which generalizes to higher order and is designed to minimize dependence on arbitrary parameters and mesh sensitivity while achieving qualitatively correct features such as mass/volume conservation and discrete maximum principles. A posteriori error estimates based on these qualitative features and underlying velocity field accuracy are used to drive the mesh adaptivity. Several estimates are being assessed on a range of verification and validation problems for structures of interest in various wave and hydraulic climates. The algorithms are combined in the open source Proteus toolkit in order to provide a framework for building parallel
An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography
Energy Technology Data Exchange (ETDEWEB)
Treiber, O [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Wanninger, F [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Fuehr, H [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Panzer, W [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Regulla, D [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Winkler, G [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany)
2003-02-21
This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
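The detection step in studies like this one can be caricatured as local contrast thresholding: flag a pixel when it exceeds its neighborhood mean by k standard deviations. This is a deliberately crude stand-in for the paper's adaptive, model-based detector; the window size and k are illustrative assumptions.

```python
import numpy as np

def detect_bright_spots(img, win=5, k=3.0):
    """Flag pixels brighter than (local mean + k * local std) over a
    win x win neighborhood. Border pixels are left unflagged. A naive
    O(N * win^2) loop, adequate for small images."""
    h = win // 2
    out = np.zeros_like(img, dtype=bool)
    for i in range(h, img.shape[0] - h):
        for j in range(h, img.shape[1] - h):
            patch = img[i - h:i + h + 1, j - h:j + h + 1]
            out[i, j] = img[i, j] > patch.mean() + k * patch.std()
    return out
```

Dose reduction adds noise and (per the paper's smoothing models) blur, which raises the local std and flattens peaks — exactly the mechanism by which detection rates degrade.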
Systematic testing of flood adaptation options in urban areas through simulations
Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten
2016-04-01
While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network over local retention of rainwater and the modification of surface flow paths to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels for the drivers of flood risk where the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into
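The adaptation-tipping-point idea above can be stated as a simple scan: increase a driver (say, rainfall intensification) until the modelled residual risk under a given mitigation measure first exceeds the acceptable level. A schematic sketch; in the study the risk function would wrap a full coupled DAnCE4Water/MIKE FLOOD simulation, whereas here it is any callable:

```python
def tipping_point(residual_risk, levels, acceptable):
    """Return the first driver level at which residual risk (risk that
    remains after a mitigation measure is applied) exceeds the
    acceptable threshold, or None if the measure holds over the whole
    scanned range. Assumes risk is non-decreasing in the driver."""
    for level in levels:
        if residual_risk(level) > acceptable:
            return level
    return None
```

Comparing tipping points across measures then shows how much future change each measure can absorb before it fails.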
Our objective was to simulate the effect of child-friendly (CF) adaptations of the National Cancer Institute’s Automated Self-Administered 24-Hour Dietary Recall (ASA24) on estimates of nutrient intake. One hundred twenty children, 8–13 years old, entered their previous day’s intake using the ASA24 ...
Interference Alignment Using Variational Mean Field Annealing
DEFF Research Database (Denmark)
Badiu, Mihai Alin; Guillaud, Maxime; Fleury, Bernard Henri
2014-01-01
We study the problem of interference alignment in the multiple-input multiple- output interference channel. Aiming at minimizing the interference leakage power relative to the receiver noise level, we use the deterministic annealing approach to solve the optimization problem. In the corresponding...... for interference alignment. We also show that the iterative leakage minimization algorithm by Gomadam et al. and the alternating minimization algorithm by Peters and Heath, Jr. are instances of our method. Finally, we assess the performance of the proposed algorithm through computer simulations....
Flight Test of an Adaptive Controller and Simulated Failure/Damage on the NASA NF-15B
Buschbacher, Mark; Maliska, Heather
2006-01-01
The method of flight-testing the Intelligent Flight Control System (IFCS) Second Generation (Gen-2) project on the NASA NF-15B is herein described. The Gen-2 project objective includes flight-testing a dynamic inversion controller augmented by a direct adaptive neural network to demonstrate performance improvements in the presence of simulated failure/damage. The Gen-2 objectives as implemented on the NASA NF-15B created challenges for software design, structural loading limitations, and flight test operations. Simulated failure/damage is introduced by modifying control surface commands, therefore requiring structural loads measurements. Flight-testing began with the validation of a structural loads model. Flight-testing of the Gen-2 controller continued, using test maneuvers designed in a sequenced approach. Success would clear the new controller with respect to dynamic response, simulated failure/damage, and with adaptation on and off. A handling qualities evaluation was conducted on the capability of the Gen-2 controller to restore aircraft response in the presence of a simulated failure/damage. Control room monitoring of loads sensors, flight dynamics, and controller adaptation, in addition to postflight data comparison to the simulation, ensured a safe methodology of buildup testing. Flight-testing continued without major incident to accomplish the project objectives, successfully uncovering strengths and weaknesses of the Gen-2 control approach in flight.
Di Donato, Paola; Romano, Ida; Mastascusa, Vincenza; Poli, Annarita; Orlando, Pierangelo; Pugliese, Mariagabriella; Nicolaus, Barbara
2018-03-01
Astrobiology studies the origin and evolution of life on Earth and in the universe. According to the panspermia theory, life on Earth could have emerged from bacterial species transported by meteorites that were able to adapt and proliferate on our planet. Therefore, the study of extremophiles, i.e. bacterial species able to live in extreme terrestrial environments, is relevant to astrobiology. In this work we describe the ability of the thermophilic species Geobacillus thermantarcticus to survive exposure to simulated space conditions, including temperature variation, desiccation, and X-ray and UVC irradiation. The response to the space conditions was assessed at the molecular level by studying changes in the morphology, the lipid and protein patterns, and the nucleic acids. G. thermantarcticus survived exposure to all the stress conditions examined: it was able to restart cellular growth at levels comparable to control experiments carried out under optimal growth conditions. Survival was accompanied by changes in protein and lipid distribution and by protection of DNA integrity.
Adaptive optics binocular visual simulator to study stereopsis in the presence of aberrations.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2010-11-01
A binocular adaptive optics visual simulator has been devised for the study of stereopsis and of binocular vision in general. The apparatus is capable of manipulating the aberrations of each eye separately while subjects perform visual tests. The correcting device is a liquid-crystal-on-silicon spatial light modulator permitting the control of aberrations in the two eyes of the observer simultaneously in open loop. The apparatus can be operated as an electro-optical binocular phoropter with two micro-displays projecting different scenes to each eye. Stereo-acuity tests (three-needle test and random-dot stereograms) have been programmed for exploring the performance of the instrument. As an example, stereo-acuity has been measured in two subjects in the presence of defocus and/or trefoil, showing a complex relationship between the eye's optical quality and stereopsis. This instrument may serve for a better understanding of the relationship between the eye's aberrations and binocular vision and stereopsis performance.
International Nuclear Information System (INIS)
Jiang Baoguang; Cao Zhaoliang; Mu Quanquan; Hu Lifa; Li Chao; Xuan Li
2008-01-01
In order to obtain a clear image of the retina of a model eye, an adaptive optics system used to correct the wave-front error is introduced in this paper. The spatial light modulator that we use here is a liquid crystal on silicon device instead of a conventional deformable mirror. A paper with carbon granules is used to simulate the retina of the human eye. The pupil size of the model eye is adjustable (3-7 mm). A Shack–Hartmann wave-front sensor is used to detect the wave-front aberration. With this construction, a peak-to-valley value of 0.086 λ is achieved, where λ is the wavelength. The modulation transfer functions before and after correction are compared, and the resolution of this system after correction (691p/m) is very close to the diffraction-limited resolution. The carbon granules on the white paper, which have a size of 4.7 μm, are seen clearly. The size of a retinal cell is between 4 and 10 μm, so this system is capable of imaging the retina of the human eye.
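For context on residual wave-front figures like the one quoted above, the Maréchal approximation relates RMS wave-front error σ to the Strehl ratio, S ≈ exp(-(2πσ/λ)²). A sketch of this standard optics rule of thumb (not code from the paper; note also that the abstract quotes a peak-to-valley figure, which is larger than the RMS this formula expects):

```python
import math

def marechal_strehl(rms_wavefront, wavelength):
    """Strehl ratio from RMS wavefront error via the Marechal
    approximation S = exp(-(2*pi*sigma/lambda)^2); accurate for small
    aberrations (S greater than roughly 0.1)."""
    phi = 2.0 * math.pi * rms_wavefront / wavelength
    return math.exp(-phi * phi)
```

The classic diffraction-limited criterion σ = λ/14 gives S ≈ 0.82, which is why residuals well below a tenth of a wave, as reported here, indicate near-diffraction-limited imaging.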
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Energy Technology Data Exchange (ETDEWEB)
Klassen, Mikhail; Pudritz, Ralph E. [Department of Physics and Astronomy, McMaster University 1280 Main Street W, Hamilton, ON L8S 4M1 (Canada); Kuiper, Rolf [Max Planck Institute for Astronomy Königstuhl 17, D-69117 Heidelberg (Germany); Peters, Thomas [Institut für Computergestützte Wissenschaften, Universität Zürich Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Banerjee, Robi; Buntemeyer, Lars, E-mail: klassm@mcmaster.ca [Hamburger Sternwarte, Universität Hamburg Gojenbergsweg 112, D-21029 Hamburg (Germany)
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Carlson, Jared; Dominguez, Arturo; N/A Collaboration
2017-10-01
The PPPL Science Education Department, in collaboration with IPP, is currently developing a versatile small-scale Stellarator for education and outreach purposes. The Princeton Adaptable Stellarator for Education and Outreach (PASEO) will provide visual demonstrations of Stellarator physics and serve as a lab platform for undergraduate and graduate students. Based on the Columbia Non-Neutral Torus (CNT) (1), and mini-CNTs (2), PASEO will create pure electron plasmas to study magnetic surfaces. PASEO uses similar geometries to these, but has an adjustable coil configuration to increase its versatility and conform to a highly visible vacuum chamber geometry. To simulate the magnetic surfaces in these new configurations, a MATLAB code utilizing the Biot-Savart law and a fourth-order Runge-Kutta method was developed, leading to new optimal current ratios. The design for PASEO and its predicted plasma confinement are presented. (1) T.S. Pedersen et al., Fusion Science and Technology Vol. 46 July 2004 (2) C. Dugan, et al., American Physical Society; 48th Annual Meeting of the Division of Plasma Physics, October 30-November 3, 2006
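The magnetic-surface mapping described above can be sketched as a Biot-Savart field evaluation over a discretized coil plus a fourth-order Runge-Kutta field-line tracer. The sketch below is a minimal illustration in Python (the authors used MATLAB); the coil geometry, currents, and step sizes are illustrative assumptions, not the PASEO configuration.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def coil_segments(radius, n=200, center=(0.0, 0.0, 0.0)):
    """Discretize a circular coil in the xy-plane into straight segments."""
    t = np.linspace(0.0, 2.0 * np.pi, n + 1)
    pts = np.stack([radius * np.cos(t), radius * np.sin(t), np.zeros_like(t)], axis=1)
    pts += np.asarray(center, dtype=float)
    return pts[:-1], pts[1:]  # start and end points of each segment

def biot_savart(point, starts, ends, current=1.0):
    """Field at `point` from straight current segments (midpoint Biot-Savart sum)."""
    dl = ends - starts
    mid = 0.5 * (starts + ends)
    r = point - mid
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = np.cross(dl, r) / rnorm**3
    return MU0 * current / (4.0 * np.pi) * dB.sum(axis=0)

def trace_field_line(start, starts, ends, step=1e-3, n_steps=1000):
    """Follow the normalized field direction with a classical RK4 step."""
    def direction(x):
        b = biot_savart(x, starts, ends)
        return b / np.linalg.norm(b)
    x = np.asarray(start, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = direction(x)
        k2 = direction(x + 0.5 * step * k1)
        k3 = direction(x + 0.5 * step * k2)
        k4 = direction(x + step * k3)
        x = x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)
```

A quick sanity check of such a tracer is the analytic field at the center of a single circular loop, B = μ0·I/(2R), which the segment sum should reproduce to within the discretization error.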
Delle Site, Luigi
2018-01-01
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria of control of the approximations implied by the coupling. Given the wide range of expertise required, this work has the intention of providing guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
International Nuclear Information System (INIS)
Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.
2011-01-01
Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics-extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.
[Adaptability of mangrove Avicennia marina seedlings to simulated tide-inundated times].
Liao, Bao-wen; Qiu, Feng-ying; Zhang, Liu-en; Han, Jing; Guan, Wei
2010-05-01
A laboratory test was conducted on the effects of different simulated tidal inundation times (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22 and 24 h·d(-1)) on the growth of Avicennia marina seedlings. Ten growth indices, including chlorophyll, root vigor, growth, biomass and photosynthetic rate, were measured, and a principal components analysis was performed combining the ten indices. The 210 d experiment showed that chlorophyll, root vigor, growth and biomass rose first and then fell as inundation time was extended, changing abruptly at the threshold inundation time of 16 h·d(-1). The growth and biomass of Avicennia marina seedlings inundated more than 16 hours per day were lower than those of seedlings inundated no more than 16 hours per day. The maximum monthly stem increment, monthly leaf blade increment, stem dry weight, root dry weight and total biomass occurred under 10 hours of inundation per day. It was concluded that Avicennia marina seedlings grow adaptively with less than 16 hours of tidal inundation per day, that 8-12 hours per day is most suitable for their growth, and that 16 h·d(-1) is a critical inundation time beyond which the plants show clear inadaptability.
Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten
2017-07-01
We present a new framework for flexible testing of flood risk adaptation strategies in a variety of urban development and climate scenarios. This framework couples the 1D-2D hydrodynamic simulation package MIKE FLOOD with the agent-based urban development model DAnCE4Water and provides the possibility to systematically test various flood risk adaptation measures ranging from large infrastructure changes over decentralised water management to urban planning policies. We have tested the framework in a case study in Melbourne, Australia, considering nine scenarios for urban development and climate and 32 potential combinations of flood adaptation measures. We found that the performance of adaptation measures depended strongly on the climate and urban development scenario considered and on the other adaptation measures implemented, suggesting that adaptive strategies are preferable over one-off investments. Urban planning policies proved to be an efficient means for the reduction of flood risk, while implementing property buyback and pipe increases in a guideline-oriented manner was too costly. Random variations in the location and timing of urban development could have a significant impact on flood risk and would in some cases outweigh the benefits of less efficient adaptation strategies. The results of our setup can serve as an input for robust decision making frameworks and thus support the identification of flood risk adaptation measures that are economically efficient and robust to variations of climate and urban layout.
Field sampling scheme optimization using simulated annealing
CSIR Research Space (South Africa)
Debba, Pravesh
2010-10-01
Full Text Available A sampling scheme is optimal if there is (i) a reduction in the number of samples while resulting in estimates of the population parameters of interest with the same or similar uncertainty, (ii) a reduction in the variability or mean squared error of the estimates of the population parameters of interest, (iii) a more correct distribution of samples representing the distribution of the population of interest, or a combination of these criteria. Development of an optimal sampling scheme requires a priori spatial information about the study area...
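Criterion (iii) above — placing samples so that they represent the spatial distribution of the population — lends itself to simulated annealing over candidate sample sets. The sketch below is a minimal illustration only: the coverage-style cost function (mean distance from each population site to its nearest sample) and every parameter are assumptions for demonstration, not the objective used in the paper.

```python
import math
import random

def coverage_cost(samples, population):
    """Mean distance from each population site to its nearest sample point."""
    total = 0.0
    for p in population:
        total += min(math.dist(p, s) for s in samples)
    return total / len(population)

def anneal_sampling(population, n_samples, t0=1.0, cooling=0.995, n_iter=2000, seed=0):
    """Simulated annealing over sample subsets: perturb one sample location,
    accept uphill moves with Metropolis probability exp(-dC/T)."""
    rng = random.Random(seed)
    current = rng.sample(population, n_samples)
    cost = coverage_cost(current, population)
    best, best_cost = list(current), cost
    t = t0
    for _ in range(n_iter):
        candidate = list(current)
        candidate[rng.randrange(n_samples)] = rng.choice(population)
        c = coverage_cost(candidate, population)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            current, cost = candidate, c
            if cost < best_cost:
                best, best_cost = list(current), cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost
```

In practice the cost function would encode the actual design criterion (estimation variance, distribution match, or a weighted combination), which is where schemes differ.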
Directory of Open Access Journals (Sweden)
S. Vaidyanathan
2014-11-01
Full Text Available This research work describes a nine-term novel 3-D chaotic system with four quadratic nonlinearities and details its qualitative properties. The phase portraits of the 3-D novel chaotic system, simulated using MATLAB, depict the strange chaotic attractor of the system. For the parameter values chosen in this work, the Lyapunov exponents of the novel chaotic system are obtained as L1 = 6.8548, L2 = 0 and L3 = −32.8779. Also, the Kaplan-Yorke dimension of the novel chaotic system is obtained as DKY = 2.2085. Next, an adaptive controller is designed to achieve global stabilization of the 3-D novel chaotic system with unknown system parameters. Moreover, an adaptive controller is designed to achieve global chaos synchronization of two identical novel chaotic systems with unknown system parameters. Finally, an electronic circuit realization of the novel chaotic system is presented using SPICE to confirm the feasibility of the theoretical model.
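The abstract does not reproduce the nine-term system's equations, but the largest Lyapunov exponent of any quadratic 3-D flow can be estimated numerically with Benettin's two-trajectory method. The sketch below uses the classical Lorenz system purely as a stand-in (it is not the paper's system); a positive result confirms chaos, as L1 > 0 does for the system above.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz system: a well-known 3-D flow with quadratic terms."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(f, x0, dt=0.01, n_steps=20000, d0=1e-8):
    """Benettin's method: track a perturbed twin trajectory, accumulate the
    log of the separation growth, and renormalize the perturbation each step."""
    x = np.array(x0, dtype=float)
    y = x + d0 * np.array([1.0, 0.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        x = rk4_step(f, x, dt)
        y = rk4_step(f, y, dt)
        d = np.linalg.norm(y - x)
        total += np.log(d / d0)
        y = x + (d0 / d) * (y - x)  # rescale perturbation back to d0
    return total / (n_steps * dt)
```

For the Lorenz parameters above the literature value is roughly 0.9; the estimate converges toward it as the integration time grows.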
One accelerated method for predicting thermal annealing effects in post-irradiation CMOS devices
International Nuclear Information System (INIS)
He Baoping; Zhou Heqin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Yao Zhibin
2005-01-01
A method for accelerated prediction of long-term annealing effects is presented. To establish the correspondence between the two annealing times, the estimating condition was that each isochronal step be equal in duration to the isothermal anneal leading to the same level of charge detrapping. The long-term isothermal behavior at 100 °C and 24 °C of CC4007-type CMOS devices was predicted using isochronal anneal data from 25-250 °C and compared with an isothermal experiment. The authors note good agreement between simulation and experiment. (authors)
Directory of Open Access Journals (Sweden)
Chao Wang
2016-08-01
Full Text Available In this article, an adaptive particle swarm optimization wavelet neural network with a double sliding modes controller is proposed to address the complex nonlinearities and uncertainties in the electric load simulator. The adaptive double sliding modes–particle swarm optimization wavelet neural network algorithm, with self-learning structure and parameters, is designed as a torque tracking controller, in which hidden nodes are added and pruned by the structure learning algorithm while the parameters are adjusted online by adaptive particle swarm optimization. Moreover, one conventional sliding mode is introduced to track the time-varying reference command, and the other, complementary, sliding mode is adopted to attenuate the effect of the approximation error. Furthermore, the relevant parameters comply with estimation laws derived from Lyapunov theory to guarantee system stability. Finally, simulation experiments are carried out on the hardware-in-the-loop platform for the electric load simulator, and the performance of the adaptive double sliding modes–particle swarm optimization wavelet neural network with structure learning is verified in comparison with similar control methods. In addition, different amplitudes and frequencies of the reference commands are introduced to further evaluate the effectiveness and robustness of the proposed algorithms.
Han, Fei
2016-05-17
The objective (mesh-independent) simulation of evolving discontinuities, such as cracks, remains a challenge. Current techniques are highly complex or involve intractable computational costs, making simulations up to complete failure difficult. We propose a framework as a new route toward solving this problem that adaptively couples local-continuum damage mechanics with peridynamics to objectively simulate all the steps that lead to material failure: damage nucleation, crack formation and propagation. Local-continuum damage mechanics successfully describes the degradation related to dispersed microdefects before the formation of a macrocrack. However, when damage localizes, it suffers spurious mesh dependency, making the simulation of macrocracks challenging. On the other hand, the peridynamic theory is promising for the simulation of fractures, as it naturally allows discontinuities in the displacement field. Here, we present a hybrid local-continuum damage/peridynamic model. Local-continuum damage mechanics is used to describe “volume” damage before localization. Once localization is detected at a point, the remaining part of the energy is dissipated through an adaptive peridynamic model capable of the transition to a “surface” degradation, typically a crack. We believe that this framework, which actually mimics the real physical process of crack formation, is the first bridge between continuum damage theories and peridynamics. Two-dimensional numerical examples are used to illustrate that an objective simulation of material failure can be achieved by this method.
Evidence for adaptive response and implication in pulse-simulated low-dose-rate radiotherapy
International Nuclear Information System (INIS)
Raaphorst, G.P.; Ng, C.E.; Smith, D.; Niedbala, M.
2000-01-01
Purpose: Pulsed-dose-rate (PDR) brachytherapy as a substitute for continuous low-dose-rate (LDR) brachytherapy has a number of clinical advantages. However, early results show that some cells can exhibit an adaptive response to radiation, and in PDR, where many pulses are given, such an adaptive response may play an important role in the outcome. Methods and Materials: Nine human cell lines (two normal fibroblast and seven tumor) were evaluated for an adaptive response. Cells were given either a single adapting dose before a challenge dose or given PDR sequences for which the average dose rate matched the LDR dose rate. Response was assessed using the colony survival assay. Results: Five of the nine cell lines showed an adapting response to single small doses of radiation. Three of these cell lines were further investigated for an adapting response to PDR, and two of the three lines (one ovarian carcinoma and one glioma) showed an adaptive response that was dependent on pulse size and interval. Conclusion: The data show that an adaptive response can occur in human cells and that it can vary among cell lines. In addition, PDR sequences also produced an adaptive response, which could have an effect on PDR therapy if such a response is found in tissues.
DEFF Research Database (Denmark)
Arnbjerg-Nielsen, Karsten; Leonhardsen, Lykke; Madsen, Henrik
2014-01-01
Climate change adaptation studies on urban flooding are often based on a model chain approach from climate forcing scenarios to analysis of adaptation measures. Previous analyses of impacts in Denmark using ensemble projections of the A1B scenario are supplemented by two high-end scenario simulations. These include a regional climate model projection forced to a global temperature increase of 6 degrees as well as a projection based on the RCP8.5 scenario. With these scenarios, projected impacts of extreme precipitation increase significantly. For extreme sea surges the impacts do not seem to change substantially. The impacts are assessed using Copenhagen as a case study. For both types of extremes, large adaptation measures are essential in the global six degree scenario; dikes must be constructed to mitigate sea surge risk and a variety of measures to store or convey storm water must...
Influence of alloying and secondary annealing on anneal hardening ...
Indian Academy of Sciences (India)
Unknown
Abstract. This paper reports the results of an investigation carried out on sintered copper alloys (Cu with 8 at% Zn, Ni or Al, and Cu-Au with 4 at% Au). The alloys were subjected to cold rolling (30, 50 and 70%) and annealed isochronally up to the recrystallization temperature. Changes in hardness and electrical conductivity were followed.
Management of the Bohunice RPVs annealing procedures
International Nuclear Information System (INIS)
Repka, M.
1994-01-01
This paper describes the realization in 1993 of the annealing regeneration programme for the RPVs of units 1 and 2 of NPP V-1 (EBO). The following steps are described in detail: the preparatory work, the schedule of the annealing procedure, and safety management, starting from zero conditions: assembly of the annealing apparatus, the annealing procedure itself, cooling down, and disassembly of the annealing apparatus. Finally, the annealing programmes of both RPVs, including the dosimetry measurements, are discussed and evaluated. (author). 3 figs
International Nuclear Information System (INIS)
Bhaskoro, Petrus Tri; Gilani, Syed Ihtsham Ul Haq; Aris, Mohd Shiraz
2013-01-01
Highlights: • We simulated and validated the cooling loads of a multi-zone academic building in a tropical region. • We analyzed the effect of occupancy patterns on the cooling loads. • An adaptive cooling technique was used to minimize the energy usage of the HVAC system. • The results are promising and show energy savings in the range of 20-30%. - Abstract: Applying an adaptive comfort temperature as the room temperature set point can reduce the energy usage of an HVAC system during cooling and heating periods. The savings are mainly due to a higher indoor temperature set point during hot periods and a lower indoor temperature set point during cold periods than the conventionally recommended values. Numerous works have shown how much energy can be saved during cooling and heating periods by applying an adaptive comfort temperature. Previous work, however, focused on continuous cooling loads as found in many office and residential buildings. This paper therefore simulates the energy saving potential of an academic glazed building in the tropical Malaysian climate by developing an adaptive cooling technique. A building simulation program (TRNSYS) was used to model the building and simulate the cooling load characteristics under the current and the proposed technique. Two experimental measurements were conducted and the results were used to validate the model. Finally, the cooling load characteristics of the academic building under the current and the proposed technique were compared, and the results showed that an annual energy saving of as much as 305,150 kWh can be achieved.
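The adaptive set-point idea can be sketched numerically. The abstract does not give the comfort relation used in the paper; the sketch below assumes the ASHRAE-55-style adaptive comfort relation T_comf = 0.31·T_out + 17.8 °C with an illustrative comfort band, so both the formula choice and the parameters are assumptions.

```python
def adaptive_setpoint(t_outdoor_mean, band=2.5, fixed_setpoint=24.0):
    """Cooling set point from an adaptive comfort model (assumed ASHRAE-55-style
    relation, not necessarily the one used in the paper).

    t_outdoor_mean: prevailing mean outdoor temperature [deg C]
    band:           half-width of the comfort band [K] (illustrative)
    fixed_setpoint: conventional fixed set point used as a floor [deg C]
    """
    t_comfort = 0.31 * t_outdoor_mean + 17.8       # adaptive comfort temperature
    return max(fixed_setpoint, t_comfort + band)   # upper edge of the comfort band
```

For a tropical mean outdoor temperature around 27 °C this yields a set point near 28.7 °C instead of 24 °C, which is the mechanism behind the cooling-energy savings reported above.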
DEFF Research Database (Denmark)
Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels
2013-01-01
In this paper, single wake characteristics have been studied both experimentally and numerically. Firstly, the wake is studied experimentally using full-scale measurements from an adapted focused pulsed lidar system, which potentially gives more insight into the wake dynamics as compared to class...... using the EllipSys3D flow solver using Large Eddy Simulation (LES) and Actuator Line Technique (ACL) to model the rotor. Discrepancies due to the uncertainties on the wake advection velocity are observed and discussed....
Arandjelović, Ognjen
2010-10-01
A large corpus of data obtained by means of empirical study of neuromuscular adaptation is currently of limited use to athletes and their coaches. One of the reasons lies in the unclear direct practical utility of many individual trials. This paper introduces a mathematical model of adaptation to resistance training, which derives its elements from physiological fundamentals on the one hand and empirical findings on the other. The key element of the proposed model is what is here termed the athlete's capability profile. This is a generalization of the length- and velocity-dependent force production characteristics of individual muscles to an exercise with arbitrary biomechanics. The capability profile, a two-dimensional function over the capability plane, plays the central role in the proposed model of the training-adaptation feedback loop. Together with a dynamic model of resistance, the capability profile is used in the model's predictive stage, when exercise performance is simulated using a numerical approximation of the differential equations of motion. Simulation results are used to infer the adaptational stimulus, which manifests itself through a fed-back modification of the capability profile. It is shown how empirical evidence of exercise specificity can be formulated mathematically and integrated in this framework. A detailed description of the proposed model is followed by examples of its application: new insights into the effects of accommodating loading for powerlifting are demonstrated. This is followed by a discussion of the limitations of the proposed model and an overview of avenues for future work.
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
Directory of Open Access Journals (Sweden)
Davide Michetti
Full Text Available The psychrophilic and mesophilic endonucleases A (EndA) from Aliivibrio salmonicida (VsEndA) and Vibrio cholerae (VcEndA) have been studied experimentally in terms of the biophysical properties related to thermal adaptation. Analysis of their static X-ray structures was not sufficient to rationalize the determinants of their adaptive traits at the molecular level. Thus, we used Molecular Dynamics (MD) simulations to compare the two proteins and unveil their structural and dynamical differences. Our simulations did not show a substantial increase in flexibility in the cold-adapted variant on the nanosecond time scale. The only exception is a more rigid C-terminal region in VcEndA, which is ascribable to a cluster of electrostatic interactions and hydrogen bonds, as also supported by MD simulations of the VsEndA mutant variant in which the cluster of interactions was introduced. Moreover, we identified three additional amino acid substitutions through multiple sequence alignment and the analysis of MD-based protein structure networks. In particular, T120V occurs in the proximity of the catalytic residue H80 and alters the interaction with the residue Y43, which belongs to the second coordination sphere of the Mg2+ ion. This makes T120V a good candidate for future experimental mutagenesis.
Pulsed Laser Annealing of Carbon
Abrahamson, Joseph P.
This dissertation investigates laser heating of carbon materials. The carbon industry has been annealing carbon via traditional furnace heating since at least 1800, when Sir Humphry Davy produced an electric arc with carbon electrodes made from carbonized wood. Much knowledge has been accumulated about carbon since then, and carbon materials have become instrumental both scientifically and technologically. However, to this day the kinetics of annealing are not known, due to the slow heating and cooling rates of furnaces. Additionally, consensus has yet to be reached on the cause of nongraphitizability. Annealing trajectories with respect to time at temperature are observed from a commercial carbon black (R250), a model graphitizable carbon (anthracene coke) and a model nongraphitizable carbon (sucrose char) via rapid laser heating. Materials were heated with 1064 nm and 10.6 μm laser radiation from a Q-switched Nd:YAG laser and a continuous wave CO2 laser, respectively. A pulse generator was used to reduce the CO2 laser pulse width and provide high temporal control. Time-temperature histories with nanosecond temporal resolution and temperature reproducibility within tens of degrees Celsius were determined by spectrally resolving the laser-induced incandescence signal and applying multiwavelength pyrometry. The Nd:YAG laser fluences include: 25, 50, 100, 200, 300, and 550 mJ/cm2. The maximum observed temperature ranged from 2,400 °C to the C2 sublimation temperature of 4,180 °C. The CO2 laser was used to collect a series of isothermal (1,200 and 2,600 °C) heat treatments versus time (100 milliseconds to 30 seconds). Laser-heated samples are compared to furnace annealing at 1,200 and 2,600 °C for 1 hour. The material transformation trajectory of Nd:YAG laser-heated carbon is different from that of traditional furnace heating. The traditional furnace annealing pathway is followed for CO2 laser heating, based upon equivalent end structures. The nanostructure of sucrose char
Chaotic Multiquenching Annealing Applied to the Protein Folding Problem
Directory of Open Access Journals (Sweden)
Juan Frausto-Solis
2014-01-01
Full Text Available The Chaotic Multiquenching Annealing algorithm (CMQA) is proposed. CMQA is a new algorithm, which is applied to the protein folding problem (PFP). This algorithm is divided into three phases: (i) the multiquenching phase (MQP), (ii) the annealing phase (AP), and (iii) the dynamical equilibrium phase (DEP). The MQP enforces several stages of quick quenching processes that include chaotic functions. The chaotic functions can increase the exploration potential of the solution space of PFP. The AP implements a simulated annealing algorithm (SA) with an exponential cooling function. MQP and AP are delimited by different temperature ranges; the MQP is applied over a range of temperatures from extremely high to very high values, while the AP searches for solutions over a range from high to extremely low values. The DEP finds the equilibrium in a dynamic way by applying the least squares method. CMQA is tested with several instances of PFP.
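The phase structure above can be illustrated with a temperature schedule. The sketch below is only a schematic reading of CMQA, not the authors' algorithm: the logistic map as the chaotic function, the quench factor, the jitter form, and all parameters are assumptions for illustration.

```python
def logistic(x, r=4.0):
    """Logistic map in its chaotic regime, used here to jitter quench temperatures."""
    return r * x * (1.0 - x)

def cmqa_schedule(t_max, t_min, n_quench=5, alpha=0.9, x0=0.7):
    """Illustrative CMQA-style temperature schedule (parameters hypothetical):
    an MQP of aggressive quenches with chaotically jittered temperatures,
    followed by an AP with standard exponential (geometric) cooling to t_min."""
    temps = []
    x = x0
    t = t_max
    for _ in range(n_quench):            # MQP: quick quenches, chaotic jitter
        x = logistic(x)
        temps.append(t * (0.5 + 0.5 * x))
        t *= 0.5                         # quench: halve the base temperature
    while t > t_min:                     # AP: exponential cooling
        temps.append(t)
        t *= alpha
    return temps
```

A solver would then run a fixed number of Metropolis sweeps at each temperature in the returned list, with the DEP applied afterwards to detect equilibrium.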
Guzman, Diego; Mohor, Guilherme; Câmara, Clarissa; Mendiondo, Eduardo
2017-04-01
Studies from around the world relate global environmental change to increased vulnerability to extreme events, such as excess and scarce precipitation - floods and droughts. Hydrological disasters have caused increasing losses in recent years. Thus, risk transfer mechanisms, such as insurance, are being implemented to mitigate impacts, finance the recovery of the affected population, and promote the reduction of hydrological risks. The main problems in implementing these strategies are: first, only partial knowledge of natural and anthropogenic climate change in terms of intensity and frequency; second, efficient risk reduction policies require accurate risk assessment, with careful consideration of costs; third, the uncertainty associated with the numerical models and input data used. The objective of this document is to introduce and discuss the feasibility of applying Hydrological Risk Transfer Models (HRTMs) as a strategy of adaptation to global climate change. The article presents a methodology for collective and multi-sectoral management of vulnerability to long-term hydrological risk under an insurance fund simulator. The methodology estimates the optimized premium as a function of willingness to pay (WTP) and the potential direct loss derived from hydrological risk. The proposed methodology structures the watershed insurance scheme in three analysis modules. First, the hazard module characterizes the hydrological threat from recorded series or from series modelled under IPCC / RCM scenarios. Second, the vulnerability module calculates the potential economic loss for each sector evaluated as a function of the return period "TR". Finally, the finance module determines the value of the optimal aggregate premium by evaluating equiprobable scenarios of water vulnerability, taking into account variables such as the maximum limit of coverage, deductible
PROGRAM OF EXERCISING WITH WEIGHTS AND SIMULATORS BY STATION METHOD ADAPTED TO FEMALES
Directory of Open Access Journals (Sweden)
Nebojša Čokorilo
2011-03-01
Full Text Available A facility for exercising with weights, popularly called "the gym", is, in the opinion of the majority of women, a place designed for men and adapted to their needs. They also fear losing their feminine characteristics by increasing muscle volume; in a word, they are afraid of effects opposite to those they expect to achieve through regular physical exercise. Blind copying onto women of training programs made exclusively for men must be avoided. On the other hand, when measured in terms of force per cm² of cross-sectional area, a woman's muscle can achieve almost the same maximal force as a man's muscle - about 3 to 4 kg/cm². Consequently, the greatest difference in total muscle quality lies in the additional percentage of muscle in the male body, which is explained by endocrine differences. Hormonal differences between men and women are certainly the underlying cause of the majority, if not all, of the differences in sporting abilities. This model of exercising with weights and simulators was made taking into account the specific features of female exercising. Exercising is performed in the mid-load zone. The mid-load is calculated by defining the weight of the load for each exercise and examinee separately. Load varied within the range of 45% to 70% of the maximum (by maximum we mean the greatest load that a gymnast manages to lift in a single repetition). At the very beginning, and when working with complete beginners, such a way of measuring load is not recommended, because unfavourable effects might result. Therefore, this way of calculating load is delayed until the second month, while for the first month each gymnast trains with a load of 30%-50% of her body weight, depending on her initial ability and the type of each specific exercise. The aim is to adjust the load
Morphological, thermal and annealed microhardness ...
Indian Academy of Sciences (India)
Unknown
blended with sugarcane bagasse which showed good mechanical properties when investigated by SEM, thermal gravimetric analysis (TGA), DSC and tensile testing (Chiellini et al 2001). An increase in the engineering yield stress was observed, with a decline in tensile impact strength. With DSC on annealing, a small ...
Morphological, thermal and annealed microhardness ...
Indian Academy of Sciences (India)
The present paper reports the preparation of full IPNs of gelatin and polyacrylonitrile. Various compositions of glutaraldehyde-crosslinked gelatin and N,N′-methylene-bis-acrylamide-crosslinked PAN were characterized by SEM and DSC techniques. The IPNs were also thermally pretreated by the annealing process.
Directory of Open Access Journals (Sweden)
Quintiliano Siqueira Schroden Nomelini
2009-12-01
Full Text Available A genetic map is a diagram representing genes and their respective positions on the chromosome. Genetic maps are essential for locating genes involved in the genetic control of quantitative traits or of other traits of economic interest. In the present work, the efficiency of the Simulated Annealing (SA), Rapid Chain Delineation (RCD), and Branch and Bound (BB) algorithms for the construction of genetic maps is evaluated via computational simulation of data (a Monte Carlo method). Under the conditions evaluated, Branch and Bound was the fastest of the three, and both BB and RCD were 100% efficient. The efficiency of SA for marker ordering varied with the number of markers: for 5 and 10 markers it was 100%, for 15 markers 99.8%, and for 20 markers 99.2%.
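The marker-ordering problem the three algorithms tackle can be stated as finding the permutation of markers that minimizes the sum of adjacent pairwise distances (total map length). A minimal simulated-annealing sketch of that search, with an illustrative distance matrix, move operator, and cooling schedule rather than the authors' settings:

```python
import math
import random

def sa_order(markers, dist, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Order markers by simulated annealing, minimizing the sum of
    adjacent pairwise distances (a proxy for total map length)."""
    rng = random.Random(seed)
    order = list(markers)

    def cost(o):
        return sum(dist[o[i]][o[i + 1]] for i in range(len(o) - 1))

    cur = cost(order)
    best, best_order = cur, order[:]
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(order)), 2))
        order[i:j + 1] = reversed(order[i:j + 1])   # 2-opt style segment reversal
        new = cost(order)
        # accept improvements always, worsenings with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i:j + 1] = reversed(order[i:j + 1])  # undo the move
        t *= cooling
    return best_order, best
```

On a small synthetic map the segment reversals quickly recover a monotone marker order; real linkage data would replace `dist` with recombination-based distances.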
Directory of Open Access Journals (Sweden)
Dębski Roman
2016-06-01
Full Text Available A new dynamic programming based parallel algorithm, adapted to on-board heterogeneous computers, for simulation-based trajectory optimization is studied in the context of "high-performance sailing". The algorithm uses a new discrete space of continuously differentiable functions, called multi-splines, as its search-space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search-space auto-adaptation properties), and possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be used effectively for solving simulation-based trajectory optimization problems. These computers can be considered micro high-performance computing (HPC) platforms: they offer high performance while remaining energy- and cost-efficient. The simulation-based approach can potentially give highly accurate results, since the mathematical model the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box performance measure and its use of OpenCL.
International Nuclear Information System (INIS)
Ristic, G.F.; Jaksic, A.B.; Pejovic, M.M.
1999-01-01
The paper presents new experimental evidence of the latent interface-trap buildup during annealing of gamma-ray irradiated power VDMOSFETs. We try to reveal the nature of this still ill-understood phenomenon by isothermal annealing, switching temperature annealing and switching bias annealing experiments. The results of numerical simulation of interface-trap kinetics during annealing are also shown. (authors)
A coherent quantum annealer with Rydberg atoms
Glaetzle, A. W.; van Bijnen, R. M. W.; Zoller, P.; Lechner, W.
2017-06-01
There is a significant ongoing effort in realizing quantum annealing with different physical platforms. The challenge is to achieve a fully programmable quantum device featuring coherent adiabatic quantum dynamics. Here we show that combining the well-developed quantum simulation toolbox for Rydberg atoms with the recently proposed Lechner-Hauke-Zoller (LHZ) architecture allows one to build a prototype for a coherent adiabatic quantum computer with all-to-all Ising interactions and, therefore, a platform for quantum annealing. In LHZ an infinite-range spin-glass is mapped onto the low energy subspace of a spin-1/2 lattice gauge model with quasi-local four-body parity constraints. This spin model can be emulated in a natural way with Rubidium and Caesium atoms in a bipartite optical lattice involving laser-dressed Rydberg-Rydberg interactions, which are several orders of magnitude larger than the relevant decoherence rates. This makes the exploration of coherent quantum enhanced optimization protocols accessible with state-of-the-art atomic physics experiments.
Bhamidipati, S.K.
2014-01-01
An asset management framework, in an agent-based model with multiple assets, is presented as a tool that can assist in developing long-term climate change adaptation strategies for transportation infrastructure.
National Research Council Canada - National Science Library
McRae, D. S; Xiao, Xudong; Hassan, Hassan A
2005-01-01
Development of the North Carolina State University (NCSU) adaptive high-resolution atmospheric model and the atmospheric version of the NCSU k-zeta turbulence model continued during this contract period...
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
International Nuclear Information System (INIS)
Zhao Gongbo; Koyama, Kazuya; Li Baojiu
2011-01-01
We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.
Energy Technology Data Exchange (ETDEWEB)
Rasia, Elena [Department of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille [Department of Physics, Yale University, New Haven, CT 06520 (United States); Borgani, Stefano [Dipartimento di Fisica dell' Università di Trieste, Sezione di Astronomia, via Tiepolo 11, I-34131 Trieste (Italy); Dolag, Klaus [University Observatory Munich, Scheiner-Str. 1, D-81679 Munich (Germany); Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia [INAF, Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131, Trieste (Italy); Mazzotta, Pasquale [Dipartimento di Fisica, Università di Roma Tor Vergata, via della Ricerca Scientifica, I-00133, Roma (Italy); Nelson, Kaylea, E-mail: rasia@umich.edu [Department of Astronomy, Yale University, New Haven, CT 06520 (United States)
2014-08-20
Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.
A Hardware-Accelerated Fast Adaptive Vortex-Based Flow Simulation Software, Phase I
National Aeronautics and Space Administration — Applied Scientific Research has recently developed a Lagrangian vortex-boundary element method for the grid-free simulation of unsteady incompressible...
International Nuclear Information System (INIS)
Su Jie; Xia Guoqing; Zhang Wei
2007-01-01
To further improve the dynamic control capabilities of the gas turbine of a nuclear power plant, this paper proposes applying a self-adaptive global predictive control algorithm to the rotational speed control of the gas turbine, covering the control structure and the design of the controller on the basis of the mathematical model of the gas turbine of the nuclear power plant. The simulation results show that the response of the gas turbine speed under the self-adaptive global predictive control algorithm is ten seconds faster than under the PID control algorithm, and the output value of the gas turbine speed under the PID control algorithm is 1%-2% higher than under the self-adaptive global predictive control algorithm. This shows that the self-adaptive global predictive control algorithm can better control the speed output of the gas turbine of a nuclear power plant and achieves the better control effect. (authors)
High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering
Energy Technology Data Exchange (ETDEWEB)
Solin, Pavel [Univ. of Reno, NV (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)
2014-03-09
We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.
Shi, Zhenzhen; Wu, Chih-Hang J; Ben-Arieh, David; Simpson, Steven Q
2015-01-01
Sepsis is a systemic inflammatory response (SIR) to infection. In this work, a system dynamics mathematical model (SDMM) is examined to describe the basic components of SIR and sepsis progression. Both innate and adaptive immunities are included, and in silico simulation results have shown that adaptive immunity has significant impacts on the outcomes of sepsis progression. Further investigation has found that the intervention timing, the intensity of anti-inflammatory cytokines, and the initial pathogen load are highly predictive of the outcomes of a sepsis episode. Sensitivity and stability analyses were carried out using bifurcation analysis to explore system stability under various initial and boundary conditions. The stability analysis suggested that the system could diverge at an unstable equilibrium after perturbations if r_t2max (the maximum release rate of tumor necrosis factor-α (TNF-α) by neutrophils) falls below a certain level. This finding conforms to clinical findings and the existing literature regarding the lack of efficacy of anti-TNF antibody therapy.
DEFF Research Database (Denmark)
Arnbjerg-Nielsen, Karsten; Leonardsen, L.; Madsen, Henrik
2015-01-01
Climate change adaptation studies on urban flooding are often based on a model-chain approach from climate forcing scenarios to analysis of adaptation measures. Previous analyses of climate change impacts in Copenhagen, Denmark, were supplemented by 2 high-end scenario simulations. These include a regional climate model projection forced to a global temperature increase of 6 degrees C in 2100 as well as a projection based on a high radiative forcing scenario (RCP8.5). With these scenarios, projected impacts of extreme precipitation increase significantly. For extreme sea surges, the impacts do not seem to change substantially compared to currently applied projections. The flood risk (in terms of expected annual damage, EAD) from sea surge is likely to increase by more than 2 orders of magnitude in 2100 compared to the present cost. The risk from pluvial flooding in 2000 is likely to increase...
Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation
Energy Technology Data Exchange (ETDEWEB)
Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL
2009-12-01
To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used non-linear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to attain their objective efficiently.
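The particle swarm metaphor behind this report is the standard velocity-update rule, in which each particle is pulled toward its own best position (cognitive term) and the swarm's best (social term). A minimal numeric sketch with illustrative parameters, not the report's agent model, whose objective would instead encode target attractiveness:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm minimizer of f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

On a convex test function such as the sphere, the swarm collapses onto the minimum within a few hundred iterations.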
Propagating self-sustained annealing of radiation-induced interstitial complexes
International Nuclear Information System (INIS)
Bokov, P M; Selyshchev, P A
2016-01-01
A propagating self-sustained annealing of radiation-induced defects as a result of thermal-concentration instability is studied. The defects considered in the model are complexes, each consisting of one impurity atom and one interstitial atom. A crystal with defects has extra energy, which is transformed into heat during defect annealing. Simulation of the annealing auto-wave has been performed, and its front and speed have been obtained. It is shown that annealing occurs in a narrow region of time and space, with two kinds of behaviour. In the first case, the speed of the auto-wave oscillates near a constant mean value and the temperature front oscillates in a complex way. In the second case, the propagation speed is constant and the temperature and concentration fronts look like sigmoid functions. (paper)
Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation
He, Yuning; Lee, Herbert K. H.; Davies, Misty D.
2012-01-01
Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.
Modeling and Simulation of An Adaptive Neuro-Fuzzy Inference System (ANFIS) for Mobile Learning
Al-Hmouz, A.; Shen, Jun; Al-Hmouz, R.; Yan, Jun
2012-01-01
With recent advances in mobile learning (m-learning), it is becoming possible for learning activities to occur everywhere. The learner model presented in our earlier work was partitioned into smaller elements in the form of learner profiles, which collectively represent the entire learning process. This paper presents an Adaptive Neuro-Fuzzy…
Struijs, J.; van de Meent, D.; Schowanek, D.; Buchholz, H.; Patoux, R.; Wolf, T.; Austin, T.; Tolls, J.; van Leeuwen, K.; Galay-Burgos, M.
2016-01-01
The multimedia model SimpleTreat evaluates the distribution and elimination of chemicals by municipal sewage treatment plants (STPs). It is applied in the framework of REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals). This article describes an adaptation of this model for
Mulder, L. J. M.; Dijksterhuis, C.; Stuiver, A.; de Waard, D.
2009-01-01
Adaptive support has the potential to keep the operator optimally motivated, involved, and able to perform a task. In order to use such support, the operator's state has to be determined from physiological parameters and task performance measures. In an environment where the task of an ambulance
Barnaud, Cecile; Promburom, Tanya; Trebuil, Guy; Bousquet, Francois
2007-01-01
The decentralization of natural resource management provides an opportunity for communities to increase their participation in related decision making. Research should propose adapted methodologies enabling the numerous stakeholders of these complex socioecological settings to define their problems and identify agreed-on solutions. This article…
Adaptive mesh simulations of astrophysical detonations using the ASCI flash code
Fryxell, B.; Calder, A. C.; Dursi, L. J.; Lamb, D. Q.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Truran, J. W.; Tufo, H. M.; Zingale, M.
2001-08-01
The Flash code was developed at the University of Chicago as part of the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The code was designed specifically to simulate thermonuclear flashes in compact stars (white dwarfs and neutron stars). This paper will give a brief introduction to the astrophysics problems we wish to address, followed by a description of the current version of the Flash code. Finally, we discuss two simulations of astrophysical detonations that we have carried out with the code. The first is of a helium detonation in an X-ray burst. The other simulation models a carbon detonation in a Type Ia supernova explosion.
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
Energy Technology Data Exchange (ETDEWEB)
Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-05-01
A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL), as part of the Civil Nuclear Energy Working Group, is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal-fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement an adaptive time-step methodology for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL together with the LOFC simulation. Various adaptive schemes are available, based on flux or power convergence criteria, that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome sub-contractor report documenting the adaptive time-step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested, using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required, without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8-group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code, which completed the same transient 3-8 times slower than real-time. A few of the user-choice combinations of the available methodologies and tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
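The adaptive schemes above select the time step from convergence criteria. The underlying idea can be illustrated with a generic step-doubling controller that accepts a step when the estimated error is within tolerance and grows or shrinks dt accordingly; this is a sketch of the general technique, not the PHISICS/RELAP5-3D implementation:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate_adaptive(f, t0, y0, t_end, dt=0.1, tol=1e-6):
    """Step-doubling error control: compare one full step with two half
    steps; accept when the difference is within tol, then grow or shrink dt."""
    t, y = t0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        big = rk4_step(f, t, y, dt)
        half = rk4_step(f, t + dt / 2, rk4_step(f, t, y, dt / 2), dt / 2)
        err = abs(half - big)
        if err <= tol:
            t, y = t + dt, half       # accept the more accurate half-step result
            if err < tol / 10:
                dt *= 2.0             # converged easily: take larger steps
        else:
            dt /= 2.0                 # error too large: retry with smaller step
    return y
```

The flux- or power-based criteria in the report play the role of `err` here: when the neutronics solution converges easily, the controller stretches the step.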
Adaptive smart simulator for characterization and MPPT construction of PV array
Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel
2016-07-01
Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; this tool is easy to use, simple, and highly responsive, and the simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.
Adaptive smart simulator for characterization and MPPT construction of PV array
Energy Technology Data Exchange (ETDEWEB)
Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah [Electromechanical engineering department, Electromechanical engineering laboratory, Badji Mokhtar University, B.P. 12, Annaba (Algeria); Dib, Djalel [Department of Electrical Engineering, University of Tebessa, Tebessa (Algeria)
2016-07-25
Partial shading is among the most important problems in large photovoltaic arrays. Much of the literature addresses the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, analogous to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. A graphical user interface (Matlab GUI) is built using a developed script; this tool is easy to use, simple, and highly responsive, and the simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.
Oglesby, R. J.; Rowe, C. M.; Hays, C.
2012-12-01
High-resolution (4-12 km) dynamical downscaling simulations of future climate change between now and 2060 have been made for Mesoamerica and the Caribbean. We use the Weather Research and Forecasting (WRF) regional climate model to downscale results from the NCAR CCSM4 CMIP5 RCP8.5 global simulation. The entire region is covered at 12 km horizontal spatial resolution, with as much as possible (especially the mountainous regions) at 4 km. We compare a control period (2006-2010) with 50 years into the future (2056-2060). The motivation for making these computationally demanding model simulations is to better define local and regional climate change effects so as to better identify and quantify impacts and associated vulnerabilities, an essential precursor to developing robust adaptation strategies. These simulations have been made in conjunction with our partners from the countries involved. As expected, all areas warm, with the warming in general largest in inland regions and smaller towards the coasts. Higher-elevation regions also tend to warm somewhat more than lower-elevation regions, a result that could not be reliably obtained, in detail, from coarse-scale global models. The precipitation signal is much more mixed and demonstrates more clearly the need for high resolution. The effects of changes in the large-scale trade wind regime tend to be restricted to the immediate Atlantic coast, while the signal in the interior is less well defined, with some indication of a northward shift in the precipitation regime due to changes both in the large-scale ITCZ and in the regional-scale Caribbean and Gulf of Mexico low-level jets. Topographic resolution continues to play a key role. The new results are currently being used by both climate scientists and policy makers to evaluate vulnerabilities, and hence develop adaptation strategies, for the affected countries.
Energy Technology Data Exchange (ETDEWEB)
Visbal, Jorge H. Wilches; Costa, Alessandro M., E-mail: jhwilchev@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil)
2016-07-01
The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes the dosimetric properties of these beams. Accurate transport theory, and the Monte Carlo method, have shown obvious differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the nominal energy of the clinical accelerator. In radiotherapy, the electron spectra should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. Three principal approaches exist to obtain electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the Simulated Annealing method is presented as a practical, reliable and simple approach to inverse reconstruction and an optimal alternative to the other options. (author)
Energy Technology Data Exchange (ETDEWEB)
Visbal, Jorge H. Wilches; Costa, Alessandro M., E-mail: jhwilchev@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto (USP), SP (Brazil). Faculdade de Filosofia, Ciencias e Letras
2016-07-01
The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes the dosimetric properties of these beams. Accurate transport theory, and the Monte Carlo method, have shown obvious differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the nominal energy of the clinical accelerator. In radiotherapy, the energy spectrum of the electrons should be considered to improve the accuracy of dose calculation, because the electron beams that reach the surface, after traveling through the internal structures of the accelerator, are not in fact monoenergetic. There are three principal approaches to obtain electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the Simulated Annealing method is presented as a practical, reliable and simple approach to inverse reconstruction and an optimal alternative to the other options. (author)
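Inverse reconstruction of this kind can be cast as finding non-negative spectral weights so that a mixture of monoenergetic basis depth-dose curves matches the measured curve. A hedged simulated-annealing sketch of that least-squares search (the basis curves, step size, and cooling schedule are illustrative, not the authors' setup):

```python
import math
import random

def anneal_spectrum(basis, measured, t0=1.0, cooling=0.999, steps=30000, seed=0):
    """Recover non-negative weights w so that sum_k w[k] * basis[k]
    approximates the measured depth-dose curve (least-squares cost)."""
    rng = random.Random(seed)
    n = len(basis)
    w = [1.0 / n] * n                    # start from a flat spectrum

    def cost(w):
        return sum((sum(w[k] * basis[k][d] for k in range(n)) - m) ** 2
                   for d, m in enumerate(measured))

    cur = cost(w)
    t = t0
    for _ in range(steps):
        k = rng.randrange(n)
        old = w[k]
        w[k] = max(0.0, old + rng.gauss(0.0, 0.05))   # non-negative perturbation
        new = cost(w)
        # Metropolis acceptance: always keep improvements, sometimes worsenings
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
        else:
            w[k] = old
        t *= cooling
    return w, cur
```

In practice the basis would be a set of Monte Carlo generated monoenergetic PDD curves; the recovered weights then approximate the accelerator's electron spectrum.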
Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo
2018-01-18
The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations for the investigated system, which is nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can then be treated with the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.
Energy Technology Data Exchange (ETDEWEB)
Ghobadi, Ahmadreza F.; Elliott, J. Richard, E-mail: elliot1@uakron.edu [Department of Chemical and Biomolecular Engineering, The University of Akron, Akron, Ohio 44325 (United States)
2013-12-21
In this work, we aim to develop a version of the Statistical Associating Fluid Theory (SAFT)-γ equation of state (EOS) that is compatible with united-atom force fields, rather than experimental data. We rely on the accuracy of the force fields to provide the relation to experimental data. Although our objective is a transferable theory of interfacial properties for soft and fused heteronuclear chains, we first clarify the details of the SAFT-γ approach in terms of site-based simulations for homogeneous fluids. We show that a direct comparison of Helmholtz free energy to molecular simulation, in the framework of a third order Weeks-Chandler-Andersen perturbation theory, leads to an EOS that takes force field parameters as input and reproduces simulation results for Vapor-Liquid Equilibria (VLE) calculations. For example, saturated liquid density and vapor pressure of n-alkanes ranging from methane to dodecane deviate from those of the Transferable Potential for Phase Equilibria (TraPPE) force field by about 0.8% and 4%, respectively. Similar agreement between simulation and theory is obtained for critical properties and second virial coefficient. The EOS also reproduces simulation data of mixtures with about 5% deviation in bubble point pressure. Extension to inhomogeneous systems and united-atom site types beyond those used in description of n-alkanes will be addressed in succeeding papers.
Ghobadi, Ahmadreza F; Elliott, J Richard
2013-12-21
In this work, we aim to develop a version of the Statistical Associating Fluid Theory (SAFT)-γ equation of state (EOS) that is compatible with united-atom force fields, rather than experimental data. We rely on the accuracy of the force fields to provide the relation to experimental data. Although our objective is a transferable theory of interfacial properties for soft and fused heteronuclear chains, we first clarify the details of the SAFT-γ approach in terms of site-based simulations for homogeneous fluids. We show that a direct comparison of Helmholtz free energy to molecular simulation, in the framework of a third order Weeks-Chandler-Andersen perturbation theory, leads to an EOS that takes force field parameters as input and reproduces simulation results for Vapor-Liquid Equilibria (VLE) calculations. For example, saturated liquid density and vapor pressure of n-alkanes ranging from methane to dodecane deviate from those of the Transferable Potential for Phase Equilibria (TraPPE) force field by about 0.8% and 4%, respectively. Similar agreement between simulation and theory is obtained for critical properties and second virial coefficient. The EOS also reproduces simulation data of mixtures with about 5% deviation in bubble point pressure. Extension to inhomogeneous systems and united-atom site types beyond those used in description of n-alkanes will be addressed in succeeding papers.
An adaptive simulation model for analysis of nuclear material shipping operations
International Nuclear Information System (INIS)
Boerigter, S.T.; Sena, D.J.; Fasel, J.H.
1998-01-01
Los Alamos has developed an advanced simulation environment designed specifically for nuclear materials operations. This process-level simulation package, the Process Modeling System (ProMoS), is based on high-fidelity material balance criteria and contains intrinsic mechanisms for waste and recycle flows, contaminant estimation and tracking, and material-constrained operations. Recent development efforts have focused on coupling complex personnel interactions, personnel exposure calculations, and stochastic process-personnel performance criteria to the material-balance simulation. This combination of capabilities allows for more realistic simulation of nuclear material handling operations where complex personnel interactions are required. The authors have used ProMoS to assess fissile material shipping performance characteristics at the Los Alamos National Laboratory plutonium facility (TA-55). Nuclear material shipping operations are ubiquitous in the DOE complex and require the largest suite of varied personnel interacting in a well-timed manner to accomplish the task. The authors have developed a baseline simulation of the present operations and have estimated the operational impacts and requirements of the pit production mission at TA-55 as a result of the SSM-PEIS. Potential bottlenecks have been explored, and mechanisms for increasing operational efficiency are identified.
Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth
2014-09-15
In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response.
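The coupling of Monte Carlo sampling with a classification step can be sketched as follows. Since the abstract does not give the ART/ART-Mapping internals, a naive nearest-centroid clustering stands in for the ART modules here, and all function names and parameters are illustrative:

```python
import numpy as np

def mc_classify(features, n_zones=3, n_samples=200, noise_sd=0.1, seed=0):
    """Monte Carlo wrapper around a simple clustering step.

    features : (n_sites, n_features) nominal site attributes.
    Uncertainty in site conditions is represented by Gaussian noise on the
    features; each site's zone is the modal assignment over the MC samples.
    (A naive nearest-centroid clustering replaces the paper's ART modules,
    purely for illustration.)
    """
    rng = np.random.default_rng(seed)
    n_sites = features.shape[0]
    votes = np.zeros((n_sites, n_zones), dtype=int)
    for _ in range(n_samples):
        noisy = features + rng.normal(0.0, noise_sd, features.shape)
        # seed centroids with sites spread across the sorted feature sums
        order = np.argsort(noisy.sum(axis=1))
        centroids = noisy[order[np.linspace(0, n_sites - 1, n_zones).astype(int)]]
        labels = np.argmin(
            ((noisy[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2), axis=1)
        votes[np.arange(n_sites), labels] += 1
    return votes.argmax(axis=1)  # modal zone per site
```

The modal vote over samples is what makes the zoning robust to the feature uncertainty, which is the role Monte Carlo plays in the MC-TSAM scheme.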
Directory of Open Access Journals (Sweden)
McCarthy Jaki
2017-09-01
Nonresponse rates have been growing over time, leading to concerns about survey data quality. Adaptive designs seek to allocate scarce resources by targeting specific subsets of sampled units for additional effort or a different recruitment protocol. In order to be effective in reducing nonresponse, the identified subsets of the sample need two key features: (1) their probabilities of response can be impacted by changing design features, and (2) once they have responded, this can have an impact on estimates after adjustment. The National Agricultural Statistics Service (NASS) is investigating the use of adaptive design techniques in the Crops Acreage, Production, and Stocks Survey (Crops APS). The Crops APS is a survey of establishments which vary in size and, hence, in their potential impact on estimates. In order to identify subgroups for targeted designs, we conducted a simulation study that used Census of Agriculture (COA) data as proxies for similar survey items. Different patterns of nonresponse were simulated to identify subgroups that may reduce estimated nonresponse bias when their response propensities are changed.
DEFF Research Database (Denmark)
Völcker, Carsten; Jørgensen, John Bagterp; Thomsen, Per Grove
2010-01-01
The implicit Euler method, normally referred to as the fully implicit (FIM) method, and the implicit pressure explicit saturation (IMPES) method are the traditional choices for temporal discretization in reservoir simulation. The FIM method offers unconditional stability in the sense of discrete....... Current reservoir simulators apply timestepping algorithms that are based on safeguarded heuristics, and can neither guarantee convergence in the underlying equation solver, nor provide estimates of the relations between convergence, integration error and stepsizes. We establish predictive stepsize...... control applied to high order methods for temporal discretization in reservoir simulation. The family of Runge-Kutta methods is presented and in particular the explicit singly diagonally implicit Runge-Kutta (ESDIRK) method with an embedded error estimate is described. A predictive stepsize adjustment...
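The idea of an embedded error estimate driving a predictive stepsize controller can be illustrated with a much simpler explicit pair than the ESDIRK schemes discussed above. The Heun/Euler pair and the asymptotic update rule below are a generic sketch of the mechanism, not the authors' controller:

```python
def integrate_adaptive(f, y0, t0, t1, tol=1e-6, h0=0.1):
    """Adaptive explicit integrator with an embedded error estimate.

    Uses the Heun(2)/Euler(1) pair as a stand-in for an ESDIRK pair; the
    asymptotic controller h <- 0.9*h*(tol/err)^(1/2) ties the stepsize to
    the estimated local error, accepting a step only when err <= tol.
    """
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_lo = y + h * k1                  # first-order (Euler) solution
        y_hi = y + 0.5 * h * (k1 + k2)     # second-order (Heun) solution
        err = abs(y_hi - y_lo) + 1e-16     # embedded local error estimate
        if err <= tol:                     # accept the step
            t, y = t + h, y_hi
        # stepsize update with safety factor and growth/shrink limits
        h *= min(5.0, max(0.1, 0.9 * (tol / err) ** 0.5))
    return y
```

Replacing the heuristic restarts of a conventional simulator with this kind of error-proportional control is precisely what lets the stepsize track the solution's local behavior.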
Matthews, R.B.; Kropff, M.J.; Horie, T.; Bachelet, D.
1997-01-01
The likely effects of climate change caused by increasing atmospheric carbon dioxide levels on rice production in Asia were evaluated using two rice crop simulation models, ORYZA1 and SIMRIW, running under 'fixed-change' climate scenarios and scenarios predicted for a doubled-CO2 (2xCO2) atmosphere.
Evolution in Lego. A physical simulation of adaptation by natural selection
DEFF Research Database (Denmark)
Christensen-Dalsgaard, Jakob; Kanneworff, Morten
2009-01-01
A simulation of the mechanism behind natural selection using 'legorgs', organisms built from 6 Lego bricks with a genetic code that determines how the bricks are placed and thus the shape of the animal. The legorgs are assigned fitness according to how far they can move. After five generations of s...
Loviisa Unit One: Annealing - healing
Energy Technology Data Exchange (ETDEWEB)
Kohopaeae, J.; Virsu, R. [ed.]; Henriksson, A. [ed.]
1997-11-01
Unit 1 of the Loviisa nuclear power plant was annealed in connection with the refuelling outage in the summer of 1996. This type of heat treatment restored the toughness properties of the pressure vessel weld, which had been embrittled by neutron radiation, so that it is almost equivalent to a new weld. The treatment itself was an ordinary metallurgical procedure that took only a few days. But the material studies that preceded it began over fifteen years ago and have put IVO at the forefront of world-wide expertise in the area of radiation embrittlement.
DEFF Research Database (Denmark)
Maghareh, Amin; Waldbjørn, Jacob Paamand; Dyke, Shirley J.
2016-01-01
frequencies and/or introduce delays that can degrade its stability and performance. In this study, the Adaptive Multi-rate Interface rate-transitioning and compensation technique is developed to enable the use of more complex numerical models. Such a multi-rate RTHS is strictly executed in real time, although...... frequency between the numerical and physical substructures and for input signals with high-frequency content. Further, it does not induce signal chattering at the coupling frequency. The effectiveness of AMRI is also verified experimentally....
Kou, Jisheng
2014-01-01
The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed; moreover, an adaptive finite element method based on a physics-based estimator is proposed, which can be coupled efficiently with Newton's method. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method.
High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems
Abdelfattah, Ahmad
2014-01-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate in detail the tomographic problem. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this is, to our knowledge, the largest-scale tomographic AO matrix solver computed to date, and it opens new research directions for extreme-scale AO simulations.
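The core linear-algebra step, pseudo-inversion of a dense symmetric matrix via eigenvalue decomposition, can be sketched on a single node with NumPy; the multi-GPU divide-and-conquer machinery and the SYMV kernel are of course omitted:

```python
import numpy as np

def sym_pinv(A, rcond=1e-12):
    """Moore-Penrose pseudo-inverse of a dense symmetric matrix.

    Computed from the eigendecomposition A = V diag(w) V^T by inverting
    only the eigenvalues above a relative cutoff, which is the same
    filtering role the truncation plays in the tomographic reconstructor.
    (Single-node NumPy sketch of the approach described above.)
    """
    w, V = np.linalg.eigh(A)                   # symmetric eigendecomposition
    cutoff = rcond * np.abs(w).max()
    w_inv = np.where(np.abs(w) > cutoff, 1.0 / w, 0.0)  # drop tiny eigenvalues
    return (V * w_inv) @ V.T                   # V diag(w_inv) V^T
```

For a rank-deficient covariance-like matrix this reproduces `np.linalg.pinv` while exploiting symmetry, which is what makes the eigenvalue route attractive at the 40,000-unknown scale quoted above.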
Adaptive fuzzy control for a simulation of hydraulic analogy of a nuclear reactor
International Nuclear Information System (INIS)
Ruan, D.; Li, X.; Eynde, G. van den
2000-01-01
In the framework of the on-going R and D project on fuzzy control applications to the Belgian Reactor 1 (BR1) at the Belgian Nuclear Research Centre (SCK-CEN), we have constructed a real fuzzy-logic-control demo model. The demo model is suitable for us to test and compare some new algorithms of fuzzy control and intelligent systems, which is advantageous because it is always difficult and time-consuming, due to safety aspects, to do all experiments in a real nuclear environment. In this chapter, we first report briefly on the construction of the demo model, and then introduce the results of a fuzzy control, a proportional-integral-derivative (PID) control and an advanced fuzzy control, in which the advanced fuzzy control is a fuzzy control with an adaptive function that can self-regulate the fuzzy control rules. Afterwards, we present a comparative study of those three methods. The results have shown that fuzzy control has more advantages in terms of flexibility, robustness, and ease of updating with respect to the PID control of the demo model, but that PID control has much higher regulation resolution due to its integration terms. The adaptive fuzzy control can dynamically adjust the rule base, and is therefore more robust and better suited to highly uncertain situations.
Long time scale simulation of a grain boundary in copper
DEFF Research Database (Denmark)
Pedersen, A.; Henkelman, G.; Schiøtz, Jakob
2009-01-01
A general, twisted and tilted, grain boundary in copper has been simulated using the adaptive kinetic Monte Carlo method to study the atomistic structure of the non-crystalline region and the mechanism of annealing events that occur at low temperature. The simulated time interval spanned 67 μs...... was also observed. In the final low-energy configurations, the thickness of the region separating the crystalline grains corresponds to just one atomic layer, in good agreement with reported experimental observations. The simulated system consists of 1307 atoms and atomic interactions were described using...
Simulation of a group of rangefinders adapted to alterations of measurement angle
Baikov, D. V.; Pastushkova, A. A.; Danshin, V. V.; Chepin, E. V.
2017-01-01
The Department of Computer Systems and Technologies at the National Research Nuclear University MEPhI operates the laboratory "Robotics". University teachers and laboratory staff implement a training program for the master's program "Computer technology in robotics". Undergraduates and graduate students conduct laboratory research and development in several promising areas in robotics. One of the methodologies actively used in dissertation research is the modeling of advanced hardware and software systems in robotics. This article presents the results of such a study. The purpose of this article is to simulate a sensor comprised of a group of laser rangefinders. The rangefinders are simulated according to the following principle: beams originate from one point, each deviating from the normal, thereby providing simultaneous scanning of different points. The data obtained in our virtual test room are used to indicate the average distance from the device to obstacles for all four sensors in real time. By varying the divergence angle of the beams we can simulate different kinds of rangefinders (laser and ultrasonic ones). By adjusting noise parameters we can achieve results similar to those of real rangefinders and obtain a surface map displaying irregularities. A model of an aircraft (quadcopter) serves as the platform on which the sensor is installed. The article reviews work on rangefinder simulation undertaken at institutions around the world and reports the tests performed. It concludes on the relevance of the suggested approach and methods, and on the necessity and feasibility of further research in this area.
International Nuclear Information System (INIS)
Michael J. Bockelie
2002-01-01
This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two phase turbulent reacting flow. The target applications for the new tool are full scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The type of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points; the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture fine flow field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block-structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International
Pisano, E D; Zong, S; Hemminger, B M; DeLuca, M; Johnston, R E; Muller, K; Braeuning, M P; Pizer, S M
1998-11-01
The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
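The two CLAHE knobs the study varies, region size and clip level, can be illustrated with a stripped-down per-tile implementation. Production CLAHE (and presumably the study's) also bilinearly interpolates the mappings between neighbouring tiles, which this sketch omits; all parameter names are illustrative:

```python
import numpy as np

def clahe_tile(img, region=32, clip=2.0, n_bins=256):
    """Minimal contrast-limited histogram equalization, applied per tile.

    The image is split into region x region tiles; each tile's histogram
    is clipped at `clip` times the mean bin count, the clipped excess is
    redistributed uniformly, and the tile is remapped through its clipped
    CDF. Returns values in [0, 1]. (Simplified sketch: no inter-tile
    interpolation, so tile seams remain visible.)
    """
    img = np.asarray(img)
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, img.shape[0], region):
        for j in range(0, img.shape[1], region):
            tile = img[i:i + region, j:j + region]
            hist, edges = np.histogram(tile, bins=n_bins)
            limit = clip * hist.mean()
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess / n_bins  # redistribute
            cdf = np.cumsum(hist)
            cdf = cdf / cdf[-1]
            idx = np.clip(np.searchsorted(edges, tile, side="right") - 1,
                          0, n_bins - 1)
            out[i:i + region, j:j + region] = cdf[idx]
    return out
```

The clip level is what limits noise amplification in flat regions of dense tissue, while the region size controls how local the contrast stretch is, matching the two parameters swept in the observer study above.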
Deng, Xiaolong; Dong, Haibo
2017-11-01
Developing a high-fidelity, high-efficiency numerical method for bio-inspired flow problems with flow-structure interaction is important for understanding related physics and developing many bio-inspired technologies. To simulate a fast-swimming large fish with multiple finlets or fish schooling, we need fine grids and/or a large computational domain, which pose major challenges for 3-D simulations. In the current work, based on the 3-D finite-difference sharp-interface immersed boundary method for incompressible flows (Mittal et al., JCP 2008), we developed an octree-like Adaptive Mesh Refinement (AMR) technique to enhance the computational ability and increase the computational efficiency. The AMR is coupled with a multigrid acceleration technique and a MPI+OpenMP hybrid parallelization. In this work, different AMR layers are treated separately, the synchronization is performed in the buffer regions, and iterations are performed for the convergence of the solution. Each big region is calculated by a MPI process which then uses multiple OpenMP threads for further acceleration, so that the communication cost is reduced. With these acceleration techniques, various canonical and bio-inspired flow problems with complex boundaries can be simulated accurately and efficiently. This work is supported by the MURI Grant Number N00014-14-1-0533 and NSF Grant CBET-1605434.
Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun
2018-03-01
This paper addresses the force tracking problem of electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is firstly derived to describe the characteristics of load simulator system more accurately. Then, by using particle swarm optimization (PSO) algorithm combined with the system hysteresis characteristic analysis, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller has the ability to ensure that the system state moves along the nonlinear sliding surface to steady state in a short time as well as good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
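The basic sliding mode mechanism, a discontinuous control forcing the state onto a sliding surface despite a bounded matched disturbance, can be sketched for a generic double integrator. This is a plain first-order SMC, not the paper's finite-time adaptive design with GMS friction compensation, and all gains are illustrative:

```python
import math

def simulate_smc(x0=1.0, v0=0.0, lam=2.0, k=5.0, dt=1e-3, steps=5000):
    """Sliding mode control of a disturbed double integrator (illustrative).

    Plant: x'' = u + d(t), with bounded unknown disturbance d.
    Sliding variable: s = v + lam*x. The control u = -lam*v - k*sign(s)
    gives s' = -k*sign(s) + d, so for k > |d| the state reaches s = 0 in
    finite time, after which x decays as x' = -lam*x regardless of d.
    Returns the final (x, v).
    """
    x, v = x0, v0
    for n in range(steps):
        t = n * dt
        d = 0.5 * math.sin(3.0 * t)       # unknown bounded disturbance
        s = v + lam * x                   # sliding variable
        u = -lam * v - k * (1 if s > 0 else -1 if s < 0 else 0)
        v += (u + d) * dt                 # forward-Euler plant update
        x += v * dt
    return x, v
```

The discontinuous sign term is also what produces the chattering that the adaptive and finite-time refinements in the paper are designed to mitigate.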
International Nuclear Information System (INIS)
Kanaroglou, P.; Maoh, H.; Woudsma, C.; Marshall, S.
2008-01-01
Extreme weather events resulting from climate change will have a significant impact on the performance of the Canadian transportation system. This presentation described a simulation tool designed to investigate the potential ramifications of future climate change on transportation and the economy. The CLIMATE-C tool was designed to simulate future weather scenarios for the years 2020 and 2050 using weather parameters obtained from a global general circulation model. The model accounted for linkages between weather, transportation, and economic systems. A random utility-based multi-regional input-output model was used to predict inter-regional trade flows by truck and rail in Canada. Simulated weather scenarios were used to describe predicted changes in demographic, social, economic, technological and environmental developments to 2100. Various changes in population and economic growth were considered. Six additional scenarios were formulated to consider moderate and high rainfall events, moderate, high and extreme snowfall, and cold temperatures. Results of the preliminary analysis indicated that the model is sensitive to changes in weather events. Future research is needed to evaluate future weather scenarios and analyze weather-transport data in order to quantify travel speed reduction parameters.
Semantic search via concept annealing
Dunkelberger, Kirk A.
2007-04-01
Annealing, in metallurgy and materials science, is a heat treatment wherein the microstructure of a material is altered, causing changes in its properties such as strength and hardness. We define concept annealing as a lexical, syntactic, and semantic expansion capability (the removal of defects and the internal stresses that cause term- and phrase-based search failure) coupled with a directed contraction capability (semantically-related terms, queries, and concepts nucleate and grow to replace those originally deformed by internal stresses). These two capabilities are tied together in a control loop mediated by the information retrieval precision and recall metrics coupled with intuition provided by the operator. The specific representations developed have been targeted at facilitating highly efficient and effective semantic indexing and searching. This new generation of Find capability enables additional processing (i.e. all-source tracking, relationship extraction, and total system resource management) at rates, precisions, and accuracies previously considered infeasible. In a recent experiment, an order-of-magnitude reduction in time to actionable intelligence and nearly three orders of magnitude reduction in false alarm rate was achieved.
Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems
Energy Technology Data Exchange (ETDEWEB)
Omitaomu, Olufemi A [ORNL]; Ganguly, Auroop R [ORNL]; Jiao, Yu [ORNL]
2009-01-01
In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents on a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, with fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
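The two measures named above can be computed directly from agent-class label sequences; a minimal sketch (the labels and pairing below are illustrative, not the study's data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over paired label sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

Applied to per-timestep class memberships from two model runs, identical dynamics give mutual information close to the marginal entropy, while independent dynamics drive it toward zero, which is the basis for the comparison.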
Microstructure and texture evolution of cold-rolled deep-drawing steel sheet during annealing
Zhou, Le-yu; Wu, Lei; Liu, Ya-zheng; Cheng, Xiao-jie; Sun, Jin-hong
2013-06-01
In accordance with experimental results about the annealing microstructure and texture of cold-rolled deep-drawing sheet based on the compact strip production (CSP) process, a two-dimensional cellular automaton simulation model, considering real space and time scales, was established to simulate recrystallization and grain growth during the actual batch annealing process. The simulation results show that pancaked grains form during recrystallization. {111} advantageous texture components become the main parts of the recrystallization texture. After grain growth, the pancaked grains coarsen gradually. The content of {111} advantageous texture components in the annealing texture increases from 55 vol% to 65 vol%; meanwhile, the contents of {112} and {100} texture components decrease by 4% and 8%, respectively, compared with the recrystallization texture. The simulation results of microstructure and texture evolution are also consistent with the experimental ones, proving the accuracy and usefulness of the model.
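The nucleation-and-growth mechanism at the heart of a recrystallization cellular automaton can be sketched with a toy update rule. Stored-energy and boundary-mobility terms, and the real space-time scaling used in the paper's model, are omitted; this keeps only the growth kinetics:

```python
import numpy as np

def grow_recrystallized(grid, steps):
    """Toy nucleation-and-growth CA for recrystallization.

    grid: 2D int array; 0 = deformed matrix, >0 = grain orientation id.
    Each step, a deformed cell adjacent (von Neumann neighbourhood, with
    periodic boundaries via np.roll) to a recrystallized cell adopts that
    neighbour's orientation, so nuclei grow until they impinge.
    """
    g = grid.copy()
    for _ in range(steps):
        new = g.copy()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(g, (dx, dy), axis=(0, 1))
            grow = (new == 0) & (shifted > 0)   # still-deformed cells
            new[grow] = shifted[grow]           # adopt neighbour orientation
        g = new
    return g
```

Impingement of competing nuclei is what produces the grain-boundary network; anisotropic growth rates per direction would give the pancaked grain shapes described above.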
Directory of Open Access Journals (Sweden)
Jesus A Garrido Alcazar
2013-10-01
Adaptable gain regulation is at the core of the forward controller operation performed by the cerebro-cerebellar loops, and it allows the intensity of motor acts to be finely tuned in a predictive manner. In order to learn and store information about body-object dynamics and to generate an internal model of movement, the cerebellum is thought to employ long-term synaptic plasticity. LTD at the PF-PC synapse has classically been assumed to subserve this function (Marr, 1969). However, this plasticity alone cannot account for the broad dynamic ranges and time scales of cerebellar adaptation. We therefore tested the role of plasticity distributed over multiple synaptic sites (Gao et al., 2012; Hansel et al., 2001) by generating an analog cerebellar model embedded into a control loop connected to a robotic simulator. The robot used a three-joint arm and performed repetitive fast manipulations with different masses along an 8-shaped trajectory. In accordance with biological evidence, the cerebellum model was endowed with both LTD and LTP at the PF-PC, MF-DCN and PC-DCN synapses. This resulted in a network scheme whose effectiveness was extended considerably compared to one including just PF-PC synaptic plasticity. Indeed, the system including distributed plasticity reliably self-adapted to manipulate different masses and to learn the arm-object dynamics over a time course that included fast learning and consolidation, along the lines of what has been observed in behavioral tests. In particular, PF-PC plasticity operated as a time correlator between the actual input state and the system error, while MF-DCN and PC-DCN plasticity played a key role in generating the gain controller. This model suggests that distributed synaptic plasticity allows generation of the complex learning properties of the cerebellum. The incorporation of further plasticity mechanisms and of spiking signal processing will allow this concept to be extended in a more realistic...
Directory of Open Access Journals (Sweden)
Wang Weng-Chung
2009-05-01
Background: The aim of this study was to verify the effectiveness and efficacy of saving time and reducing burden for patients, nurses, and even occupational therapists through computer adaptive testing (CAT). Methods: Based on an item bank of the Barthel Index (BI) and the Frenchay Activities Index (FAI) for assessing comprehensive activities of daily living (ADL) function in stroke patients, we developed a visual basic application (VBA-Excel) CAT module, and (1) investigated whether the averaged test length via CAT is shorter than that of the traditional all-item-answered non-adaptive testing (NAT) approach through simulation, (2) illustrated the CAT multimedia on a tablet PC showing data collection and response errors of ADL clinical functional measures in stroke patients, and (3) demonstrated the quality control of the endorsing scale with fit statistics to detect responding errors, which are further immediately reconfirmed by technicians once the patient ends the CAT assessment. Results: The results show that the number of endorsed items could be shorter on CAT (M = 13.42) than on NAT (M = 23), a 41.64% saving in test length. However, averaged ability estimations reveal insignificant differences between CAT and NAT. Conclusion: This study found that mobile nursing services placed at the bedsides of patients could, through the programmed VBA-Excel CAT module, reduce the burden to patients and save time, more so than the traditional NAT paper-and-pencil testing appraisals.
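The CAT-versus-NAT test-length comparison rests on a standard adaptive loop: pick the most informative remaining item, re-estimate ability, and stop when precision suffices. A minimal Rasch-model sketch follows; it is not the authors' VBA-Excel module, and all parameters (item difficulties, stopping rule) are illustrative:

```python
import math
import random

def cat_simulate(true_theta, difficulties, se_stop=0.5, seed=1):
    """Minimal computer-adaptive-testing loop under a Rasch model.

    Items are selected with difficulty closest to the current ability
    estimate (maximum Rasch information); responses are simulated from
    the true ability; ability is re-estimated by EAP on a theta grid
    under a N(0,1) prior. Stops when the posterior SD drops below
    se_stop or items run out. Returns (theta_hat, items_used).
    """
    rng = random.Random(seed)
    grid = [-4 + 0.1 * i for i in range(81)]
    post = [math.exp(-0.5 * th * th) for th in grid]      # N(0,1) prior
    unused = list(range(len(difficulties)))
    theta_hat, n_used = 0.0, 0
    while unused:
        j = min(unused, key=lambda i: abs(difficulties[i] - theta_hat))
        unused.remove(j)
        b = difficulties[j]
        # simulate a Rasch response from the examinee's true ability
        x = 1 if rng.random() < 1 / (1 + math.exp(-(true_theta - b))) else 0
        post = [w * (1 / (1 + math.exp(-(th - b))) if x
                     else 1 - 1 / (1 + math.exp(-(th - b))))
                for w, th in zip(post, grid)]
        s = sum(post)
        theta_hat = sum(w * th for w, th in zip(post, grid)) / s
        var = sum(w * (th - theta_hat) ** 2 for w, th in zip(post, grid)) / s
        n_used += 1
        if var ** 0.5 < se_stop:
            break
    return theta_hat, n_used
```

Because each well-targeted item carries maximal information, the precision criterion is met well before the item bank is exhausted, which is the mechanism behind the shorter CAT test lengths reported above.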
Directory of Open Access Journals (Sweden)
Glushchevsky Vyacheslav V.
2017-09-01
The article is concerned with addressing the topical problem of effectively countering real and potential threats to the economic security of enterprises and reducing the risks of their occurrence. The article is aimed at simulating the adaptive mechanisms to counteract external influences on the marketing component of an enterprise's economic security and at developing a system of measures for removing threats of price destabilization of its orders portfolio, based on a modern economic-mathematical instrumentarium. The common causes of threats related to the price policy of an enterprise and the tactics of contractual processes with business partners have been explored. Hidden reserves for price maneuvering in concluding contracts with customers have been identified. An algorithmic model for an adaptive pricing task across the product assortment of an industrial enterprise has been built. On the basis of this model, mechanisms have been developed to counteract the threats of occurrence and aggravation of a «price conflict» between the producing enterprise and the potential customers of its products, and to advise on how to remove the risks of their occurrence. Prospects for using the methodology together with the instrumentarium of economic-mathematical modeling for price risk management tasks have been indicated.
Webb, Richard M.; Sandstrom, Mark W.; Krutz, Jason L.; Shaner, Dale L.
2011-01-01
In the present study a branched serial first-order decay (BSFOD) model is presented and used to derive transformation rates describing the decay of a common herbicide, atrazine, and its metabolites observed in unsaturated soils adapted to previous atrazine applications and in soils with no history of atrazine applications. Calibration of BSFOD models for soils throughout the country can reduce the uncertainty, relative to that of traditional models, in predicting the fate and transport of pesticides and their metabolites and thus support improved agricultural management schemes for reducing threats to the environment. Results from application of the BSFOD model to better understand the degradation of atrazine supports two previously reported conclusions: atrazine (6-chloro-N-ethyl-N′-(1-methylethyl)-1,3,5-triazine-2,4-diamine) and its primary metabolites are less persistent in adapted soils than in nonadapted soils; and hydroxyatrazine was the dominant primary metabolite in most of the soils tested. In addition, a method to simulate BSFOD in a one-dimensional solute-transport unsaturated zone model is also presented.
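A branched serial first-order decay chain has a closed-form (Bateman-type) solution for the parent compound and each first-generation metabolite. The sketch below is a generic illustration with assumed rate constants and branching fractions, not the calibrated BSFOD model of the study.

```python
import math

def bsfod(c0, k_parent, branches, t):
    """Branched serial first-order decay: the parent (initial amount c0)
    decays with rate k_parent; each branch i receives fraction f_i of the
    decayed mass and itself decays with rate k_i. Returns (parent, daughters)
    at time t. Assumes k_i != k_parent (standard Bateman solution)."""
    parent = c0 * math.exp(-k_parent * t)
    daughters = []
    for f, k in branches:
        # exact solution of dD/dt = f*k_parent*P(t) - k*D, with D(0) = 0
        d = f * k_parent * c0 * (math.exp(-k_parent * t) - math.exp(-k * t)) / (k - k_parent)
        daughters.append(d)
    return parent, daughters
```

For atrazine one branch might represent hydroxyatrazine and another the dealkylated metabolites; calibration then amounts to fitting `k_parent`, the `f_i`, and the `k_i` to observed soil concentrations.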
International Nuclear Information System (INIS)
Capeluto, I. Guedi; Ochoa, Carlos E.
2014-01-01
Vast amounts of the European residential stock were built with limited consideration for energy efficiency, yet its refurbishment can help reach national energy reduction goals, decreasing environmental impact. Short-term retrofits with reduced interference to inhabitants can be achieved by upgrading facades with elements that enhance energy efficiency and user comfort. The European Union-funded Meefs Retrofitting (Multifunctional Energy Efficient Façade System) project aims to develop an adaptable mass-produced facade system for energy improvement in existing residential buildings throughout the continent. This article presents a simplified methodology to identify preferred strategies and combinations for the early design stages of such system. This was derived from studying weather characteristics of European regions and outlining climatic energy-saving strategies based on human thermal comfort. Strategies were matched with conceptual technologies like glazing, shading and insulation. The typical building stock was characterized from statistics of previous European projects. Six improvements and combinations were modelled using a simulation model, identifying and ranking preferred configurations. The methodology is summarized in a synoptic scheme identifying the energy rankings of each improvement and combination for the studied climates and façade orientations. - Highlights: • First results of EU project for new energy efficient façade retrofit system. • System consists of prefabricated elements with multiple options for flexibility. • Modular strategies were determined that adapt to different climates. • Technologies matching the strategies were identified. • Presents a method for use and application in different climates across Europe
Mecklenburgh, J S; Mapleson, W W
1998-04-01
The aim of this study was to develop a lung model which adapted its active simulation of spontaneous breathing to the ventilatory assistance it received--an "aa" or "a-squared" lung model. The active element required was the waveform of negative pressure (pmus), which is equivalent to respiratory muscle activity. This had been determined previously in 12 healthy volunteers and comprised a contraction phase, relaxation phase and expiratory pause. Ventilatory assistance had shortened the contraction and relaxation phases without changing their shape, and lengthened the pause phase to compensate. In this study, the contraction and relaxation phases could be adequately represented by two quadratic equations, in addition to a third to provide a smooth transition. Therefore, the adaptive element required was the prediction of the duration of the contraction phase. The best predictive variables were flow at the end of contraction or peak mouth pressure. Determination of either of these allowed adjustment of the "standard" waveform to the level of assistance produced by an "average" ventilator, in a manner that matched the mean response of 12 healthy conscious subjects.
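The pmus waveform described above, two quadratic phases followed by an expiratory pause, can be sketched as a piecewise function of time. The shape below is an illustrative simplification (the paper's third, transition quadratic is omitted) and all parameter names are hypothetical.

```python
def pmus(t, t_c, t_r, t_total, p_peak):
    """Piecewise-quadratic respiratory-muscle pressure magnitude:
    quadratic contraction rising smoothly to p_peak at t_c, quadratic
    relaxation back to zero at t_c + t_r, then an expiratory pause
    until t_total. The cycle repeats with period t_total."""
    t = t % t_total                      # periodic breathing cycle
    if t < t_c:
        s = t / t_c
        return p_peak * (2 * s - s * s)  # zero slope at the peak
    if t < t_c + t_r:
        s = (t - t_c) / t_r
        return p_peak * (1 - s * s)      # smooth departure from the peak
    return 0.0                           # expiratory pause
```

In an adaptive ("a-squared") lung model, `t_c` would be shortened according to the predicted effect of ventilatory assistance, e.g. from end-of-contraction flow or peak mouth pressure, with `t_total` held fixed by lengthening the pause.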
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
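The static analogue of one mesh-relocation step is equidistribution of a monitor function: grid points are placed so that each cell carries an equal share of the monitor's integral, concentrating points where the monitor (e.g. an interpolation-error estimate) is large. A minimal 1D sketch follows; the full moving mesh PDE approach evolves this relocation continuously in time alongside the physical equations.

```python
import bisect

def equidistribute(xs, m_vals, n):
    """Given monitor values m_vals sampled on a fine grid xs, return n+1 mesh
    points that equidistribute the monitor (equal integral per cell)."""
    # trapezoidal cumulative integral of the monitor
    cum = [0.0]
    for i in range(1, len(xs)):
        cum.append(cum[-1] + 0.5 * (m_vals[i - 1] + m_vals[i]) * (xs[i] - xs[i - 1]))
    total = cum[-1]
    mesh = [xs[0]]
    for k in range(1, n):
        target = total * k / n                      # k-th equal share
        j = bisect.bisect_left(cum, target)         # fine cell containing it
        frac = (target - cum[j - 1]) / (cum[j] - cum[j - 1])
        mesh.append(xs[j - 1] + frac * (xs[j] - xs[j - 1]))
    mesh.append(xs[-1])
    return mesh
```

With a uniform monitor the result is a uniform mesh; a monitor that grows across the domain pulls points toward the high-monitor end.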
Owolabi, Kolade M.
2017-03-01
In this paper, some nonlinear space-fractional-order reaction-diffusion equations (SFORDE) on a finite but large spatial domain x ∈ [0, L], x = x(x, y, z), and t ∈ [0, T] are considered. Also in this work, the standard reaction-diffusion system with boundary conditions is generalized by replacing the second-order spatial derivatives with Riemann-Liouville space-fractional derivatives of order α, for 0 < α < 2. A Fourier spectral method is introduced as a better alternative to existing low-order schemes for the integration of fractional-in-space reaction-diffusion problems, in conjunction with an adaptive exponential time differencing method, and a range of one-, two- and three-component SFORDE are solved numerically to obtain patterns in one and two dimensions, with a straightforward extension to three spatial dimensions, in the sub-diffusive (0 < α < 1) reaction-diffusion case. With application to models in biology and physics, different spatiotemporal dynamics are observed and displayed.
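In Fourier space the fractional Laplacian acts diagonally, with symbol −|k|^α, so an exponential time differencing step reduces to a per-mode scalar update. The sketch below uses a naive O(n²) DFT to stay dependency-free and a first-order ETD scheme; the paper employs a higher-order adaptive ETD method, so this is only a structural illustration.

```python
import cmath
import math

def dft(v, sign):
    """Unnormalized discrete Fourier transform (naive O(n^2), for clarity)."""
    n = len(v)
    return [sum(v[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def etd1_step(u, dt, alpha, diff, f, length):
    """One first-order ETD step for u_t = -diff*(-Laplacian)^(alpha/2) u + f(u)
    on a periodic domain of the given length."""
    n = len(u)
    uh = dft(u, -1)
    fh = dft([f(v) for v in u], -1)
    out = []
    for m in range(n):
        kk = m if m <= n // 2 else m - n                       # symmetric wavenumber
        c = -diff * abs(2 * math.pi * kk / length) ** alpha    # fractional symbol
        e = cmath.exp(c * dt)
        phi = (e - 1) / c if c != 0 else dt                    # ETD1 weight
        out.append(e * uh[m] + phi * fh[m])
    return [v.real / n for v in dft(out, +1)]                  # inverse transform
```

For α = 2 this reduces to the classical heat equation: a single cosine mode decays by exactly exp(−dt) per unit-wavenumber step.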
Impact of Annealing on In(OH)xSy Thin Films Grown by Solution Technique
Directory of Open Access Journals (Sweden)
Cliff Orori Mosiori
2017-07-01
Full Text Available Indium hydroxy sulphide, In(OH)xSy, offers abundant raw materials, low cost, non-toxicity, radiation resistance, high-temperature resistance, and chemical stability, and has therefore become an extremely important photoelectric, photovoltaic, and light-sensing thin-film material. One treatment applied to this material is thermal annealing, a process used to relieve intrinsic stress, improve structure, and control surface roughness, thereby tuning its electro-optical properties. Qualitatively, annealing modifies surface morphology, intrinsic parameters, and electron mobility as functions of temperature and time. In this work, the surface modification of In(OH)xSy thin films subjected to an annealing process is discussed. Both the electrical and the optical effects of annealing were examined, and characterizations were performed at different annealing temperatures in nitrogen over the range 373–573 K. Using measured optical data and simulated data with the Scout software, the results showed that increasing the annealing temperature causes a slight decrease in transmittance and modifies the energy band gap values between 2.79 and 3.32 eV. It was concluded that annealing influences the optical transmittance and resistance of the films, making them promising candidates for photovoltaic and light-sensing applications.
Directory of Open Access Journals (Sweden)
R. A. Prakapovich
2014-01-01
Full Text Available An adaptive neurocontroller for autonomous robotic vehicle control is suggested. It is designed to generate control signals according to a preprogrammed motion algorithm and to develop individual reactions to certain external impacts during operation, which allows the robot to adapt to changes in the external environment. To debug and test the proposed neurocontroller, a specially designed program able to simulate the operation of the sensory and executive systems of the robotic vehicle is used.
Energy Technology Data Exchange (ETDEWEB)
Visbal, Jorge H. Wilches; Costa, Alessandro M. da, E-mail: jhwilchev@gmail.com [Universidade de Sao Paulo (USP), Ribeirão Preto, SP (Brazil). Faculdade de Filosofia, Ciências e Letras
2017-07-01
Clinical electron beams are composed of a mixture of pure electrons and Bremsstrahlung photons produced in the structures of the accelerator head as well as in the air. Accurate knowledge of these components is important for calculating the dose and for treatment planning. There are at least two approaches to determine the contribution of the photons to the percentage depth dose of clinical electrons: (a) an analytical method that calculates the photon dose from a prior determination of the spectrum of the incident Bremsstrahlung photons; (b) an adjustment method based on a semi-empirical biexponential formula whose four parameters must be established by optimization methods. The results show that the generalized simulated annealing method can calculate the photon contamination dose, overestimating the dose in the tail by no more than 0.6% of the maximum dose (electrons and photons). (author)
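Fitting the four parameters of a biexponential model by simulated annealing can be sketched as below. The functional form, starting point, and geometric cooling schedule are illustrative assumptions; they stand in for, but are not, the semi-empirical formula and the generalized simulated annealing variant used in the paper.

```python
import math
import random

def biexp(z, p):
    """Hypothetical four-parameter biexponential depth-dose shape."""
    a, b, c, d = p
    return a * math.exp(-b * z) + c * math.exp(-d * z)

def anneal(z, y, p0, t0=1.0, cooling=0.995, steps=4000, seed=1):
    """Minimal simulated-annealing least-squares fit of the four parameters."""
    rng = random.Random(seed)
    def cost(p):
        return sum((biexp(zi, p) - yi) ** 2 for zi, yi in zip(z, y))
    p, c = list(p0), cost(p0)
    best, bc = list(p), c
    t = t0
    for _ in range(steps):
        # Gaussian proposal whose size shrinks with the temperature
        q = [v + rng.gauss(0.0, 0.05 * t + 1e-3) for v in p]
        cq = cost(q)
        # Metropolis acceptance: always take improvements, sometimes take worse
        if cq < c or rng.random() < math.exp(-(cq - c) / max(t, 1e-12)):
            p, c = q, cq
            if c < bc:
                best, bc = list(p), c
        t *= cooling
    return best, bc
```

The Metropolis acceptance of occasional uphill moves is what lets the search escape local minima of the least-squares surface, the property that motivates annealing over plain gradient descent here.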
International Nuclear Information System (INIS)
Zhang Wen; Haas, Stephan
2009-01-01
An implementation of the fast multipole method (FMM) is performed for magnetic systems with long-ranged dipolar interactions. Expansion in spherical harmonics of the original FMM is replaced by expansion of polynomials in Cartesian coordinates, which is considerably simpler. Under open boundary conditions, an expression for the multipole moments of point dipoles in a cell is derived. These make the program appropriate for nanomagnetic simulations, including magnetic nanoparticles and ferrofluids. The performance is optimized in terms of cell size and parameter set (expansion order and opening angle), and the trade-off between computing time and accuracy is quantitatively studied. A rule of thumb is proposed to decide the appropriate average number of dipoles in the smallest cells, and an optimal choice of parameter set is suggested. Finally, the superiority of the Cartesian coordinate FMM is demonstrated by comparison to the spherical harmonics FMM and FFT.
Bode, Paul; Ostriker, Jeremiah P.
2003-03-01
An improved implementation of an N-body code for simulating collisionless cosmological dynamics is presented. TPM (tree particle-mesh) combines the PM method on large scales with a tree code to handle particle-particle interactions at small separations. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The new implementation includes individual particle time steps within trees, an improved treatment of tidal forces on trees, new criteria for higher force resolution and choice of time step, and parallel treatment of large trees. TPM is compared to P3M and a tree code (GADGET) and is found to give equivalent results in significantly less time. The implementation is highly portable (requiring a FORTRAN compiler and MPI) and efficient on parallel machines. The source code can be found on the World Wide Web.
Badilatti, Sandro D; Christen, Patrik; Parkinson, Ian; Müller, Ralph
2016-12-08
Osteoporosis is a major medical burden and its impact is expected to increase in our aging society. It is associated with low bone density and microstructural deterioration. Treatments are available, but the critical factor is to identify individuals at risk of osteoporotic fractures. Computational simulations investigating not only changes in net bone tissue volume, but also changes in its microstructure, where osteoporotic deterioration occurs, might help to better predict the risk of fractures. In this study, bone remodeling simulations with a mechanical feedback loop were used to predict microstructural changes due to osteoporosis and their impact on bone fragility from 50 to 80 years of age. Starting from homeostatic bone remodeling of a group of seven mixed-sex whole vertebrae, five mechanostat models mimicking different biological alterations associated with osteoporosis were developed, leading to imbalanced bone formation and resorption with a total net loss of bone tissue. A model with reduced bone formation rate and cell sensitivity led to the best match of morphometric indices compared to literature data and was chosen to predict postmenopausal osteoporotic bone loss in the whole group. Thirty years of osteoporotic bone loss were predicted, with changes in morphometric indices in agreement with experimental measurements and major deviations only in trabecular number and trabecular separation. In particular, although optimized to match the morphometric indices alone, the predicted bone loss revealed realistic changes at the organ level and in biomechanical competence. While the osteoporotic bone was able to maintain mechanical stability to a great extent, higher fragility towards error loads was found for the osteoporotic bones. Copyright © 2016 Elsevier Ltd. All rights reserved.
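The mechanical feedback loop in such remodeling simulations is typically a mechanostat rule: no net remodeling while the local mechanical stimulus stays inside a "lazy zone" around a setpoint, and net formation or resorption ramping up outside it. The sketch below is hypothetical; the parameter names, the linear ramp, and the symmetric clipping are all illustrative assumptions rather than the authors' five calibrated models.

```python
def remodeling_rate(stimulus, setpoint, half_width, gain, rate_max):
    """Net bone formation rate (positive) or resorption rate (negative)
    as a function of the local mechanical stimulus. Zero inside the lazy
    zone [setpoint - half_width, setpoint + half_width]; linear ramp with
    slope `gain` outside it, clipped at +/- rate_max."""
    d = stimulus - setpoint
    if abs(d) <= half_width:
        return 0.0                                   # lazy zone: homeostasis
    s = d - half_width if d > 0 else d + half_width  # distance past the zone edge
    return max(-rate_max, min(rate_max, gain * s))   # clipped linear response
```

Reducing `gain` (cell sensitivity) or biasing the formation side downward are the kinds of alterations that shift such a model from homeostasis to the net bone loss described above.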
Bosworth, John T.; Williams-Hayes, Peggy S.
2010-01-01
Adaptive flight control systems have the potential to be more resilient to extreme changes in airplane behavior. Extreme changes could be a result of a system failure or of damage to the airplane. A direct adaptive neural-network-based flight control system was developed for the National Aeronautics and Space Administration NF-15B Intelligent Flight Control System airplane and subjected to an in-flight simulation of a failed (frozen, unmovable) stabilator. Formation flight handling qualities evaluations were performed with and without neural network adaptation. The results of these flight tests are presented. Comparison with simulation predictions and analysis of the performance of the adaptation system are discussed. The performance of the adaptation system is assessed in terms of its ability to decouple the roll and pitch response and reestablish good onboard model tracking. Flight evaluation with the simulated stabilator failure and adaptation engaged showed that there was generally improvement in the pitch response; however, a tendency for roll pilot-induced oscillation was experienced. A detailed discussion of the cause of the mixed results is presented.
Understanding the microwave annealing of silicon
Directory of Open Access Journals (Sweden)
Chaochao Fu
2017-03-01
Full Text Available Though microwave annealing appears very appealing due to its unique features, the lack of an in-depth understanding and of an accurate model hinders its application in semiconductor processing. In this paper, a physics-based model and accurate calculations for the microwave annealing of silicon are presented. Both thermal effects, including ohmic conduction loss and dielectric polarization loss, and non-thermal effects are thoroughly analyzed. We designed unique experiments to verify the mechanism and extract the relevant parameters. We also explicitly illustrate the dynamic interaction processes of the microwave annealing of silicon. This work provides an in-depth understanding that can expedite the application of microwave annealing in semiconductor processing and opens the door to implementing microwave annealing in future research and applications.
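The two thermal loss channels named above enter the standard volumetric microwave heating expression p = (σ + ωε₀ε″)·E_rms², where σ captures ohmic conduction loss and ωε₀ε″ the dielectric polarization loss. A minimal sketch of that textbook formula follows; the material values in the test are assumed, not taken from the paper.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def microwave_power_density(sigma, eps_r_imag, freq, e_rms):
    """Volumetric heating (W/m^3) absorbed from a microwave field:
    ohmic conduction loss (sigma * E^2) plus dielectric polarization
    loss (omega * eps0 * eps'' * E^2)."""
    omega = 2 * math.pi * freq
    return (sigma + omega * EPS0 * eps_r_imag) * e_rms ** 2
```

Because σ in silicon rises steeply with doping and temperature, this expression already hints at the selective, dopant-dependent heating that makes microwave annealing attractive.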
Adaptation of the MAST passive current simulation model for real-time plasma control
International Nuclear Information System (INIS)
McArdle, G.J.; Taylor, D.
2008-01-01
Successful equilibrium reconstruction on MAST depends on a reliable estimate of the passive current induced in the thick vacuum vessel (which also acts as the load assembly) and other toroidally continuous internal support structures. For the EFIT reconstruction code, a pre-processing program takes the measured plasma and PF coil current evolution and uses a sectional model of the passive structure to solve the ODEs for electromagnetic induction. The results are written to a file, which is treated by EFIT as a set of virtual measurements of the passive current in each section. However, when a real-time version of EFIT was recently installed in the MAST plasma control system, a similar function was required for real-time estimation of the instantaneous passive current. This required several adaptation steps for the induction model to reduce the computational overhead to the absolute minimum whilst preserving the accuracy of the result. These include: conversion of the ODE to use an auxiliary variable, avoiding the need to calculate the time derivative of the current; minimisation of the order of the system via model reduction techniques with a state-space representation of the problem; transformation to eigenmode form, to diagonalise the main matrix for faster computation; discretisation of the ODE; hand-optimisation to use vector instruction extensions in the real-time processor; and splitting the task into two parts, the time-critical feedback part and the next-cycle pre-calculation part. After these optimisations, the algorithm was successfully implemented at a cost of just 65 μs per 500 μs control cycle, with only 27 μs added to the control latency. The results show good agreement with the original off-line version. Some of these optimisations have also been used subsequently to improve the performance of the off-line version.
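The combination of eigenmode transformation and discretisation reduces the real-time work to independent scalar updates, one per mode: after diagonalisation, each state obeys dx/dt = λx + u and can be stepped exactly under a zero-order hold. The sketch below illustrates that idea under assumed real eigenvalues and pre-mixed inputs; it is not the MAST code itself.

```python
import math

def precompute(eigvals, dt):
    """Per-mode exact discretisation factors for dx/dt = lam*x + u, input held
    constant over each step: x[k+1] = ed*x[k] + phi*u[k]."""
    ed = [math.exp(lam * dt) for lam in eigvals]
    phi = [(e - 1.0) / lam if lam != 0 else dt for lam, e in zip(eigvals, ed)]
    return ed, phi

def step(x, u, ed, phi):
    """One real-time update of the eigenmode states: O(n) multiplies, no
    matrix solve and no numerical time derivative needed in the loop."""
    return [e * xi + p * ui for xi, e, p, ui in zip(x, ed, phi, u)]
```

The expensive parts (building the eigenbasis, projecting the drive currents onto it) happen once off-line or in the next-cycle pre-calculation slot, leaving only the cheap `step` call on the time-critical feedback path.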
Yuan, Xuefei
2012-07-01
Numerical simulations of the four-field extended magnetohydrodynamics (MHD) equations with hyper-resistivity terms present a difficult challenge because of demanding spatial resolution requirements. A time-dependent sequence of r-refinement adaptive grids obtained from solving a single Monge-Ampère (MA) equation addresses the high-resolution requirements near the x-point for numerical simulation of the magnetic reconnection problem. The MHD equations are transformed from Cartesian coordinates to solution-defined curvilinear coordinates. After the application of an implicit scheme to the time-dependent problem, the parallel Newton-Krylov-Schwarz (NKS) algorithm is used to solve the system at each time step. Convergence and accuracy studies show that the curvilinear solution requires less computational effort than a pure Cartesian treatment. This is due both to the more optimal placement of the grid points and to the improved convergence of the implicit solver, nonlinearly and linearly. The latter effect, which is significant (more than an order of magnitude in number of inner linear iterations for equivalent accuracy), does not yet seem to be widely appreciated. © 2012 Elsevier Inc.
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
Zhu, Jiahua; Penfold, Scott N
2017-10-01
To investigate the feasibility of a 3D imaging system utilizing a 155 Eu source and pixelated cadmium-zinc-telluride (CZT) detector for applications in adaptive radiotherapy. Specifically, to compare the reconstructed stopping power ratio (SPR) values of a head phantom obtained with the proposed imaging technique with theoretical SPR values. A Geant4 Monte Carlo simulation was performed with the novel imaging system. The simulation was repeated with a typical 120 kV X-ray tube spectrum while maintaining all other parameters. Dual energy 155 Eu source cone beam computed tomography (CBCT) images were reconstructed with an iterative projection algorithm known as total variation superiorization with diagonally relaxed orthogonal projections (TVS-DROP). Single energy 120 kV source CBCT images were also reconstructed with TVS-DROP. Reconstructed images were converted to SPR with stoichiometric calibration techniques based on ICRU 44 tissues. Quantitative accuracy of reconstructed attenuation coefficient images as well as SPR images were compared. Images generated by gamma emissions of 155 Eu showed superior contrast resolution to those generated by the 120 kV spectrum. Quantitatively, all reconstructed images correlated with reference attenuation coefficients of the head phantom within 1 standard deviation. Images generated with the 155 Eu source showed a smaller standard deviation of pixel values. Use of a dual energy conversion into SPR resulted in superior SPR accuracy with the 155 Eu source. 155 Eu was found to display desirable qualities when used as a source for dual energy CBCT. Further work is required to demonstrate whether the simulation results presented here can be translated into an experimental prototype. © 2017 American Association of Physicists in Medicine.
Abadli, S.; Mansour, F.; Perrera, E. Bedel
We have investigated and modeled the complex boron (B) redistribution process in a strongly doped silicon bilayer structure. A one-dimensional two-stream transfer model, well adapted to the particular structure of bilayers and to strong-concentration effects, has been developed. This model takes into account the instantaneous kinetics of B transfer, trapping, clustering and segregation during the thermal B activation annealing. The silicon bilayers were obtained by the low-pressure chemical vapor deposition (LPCVD) method, using an in-situ nitrogen-doped silicon (NiDoS) layer and a strongly B-doped polycrystalline-silicon (P+) layer. To avoid long redistributions, thermal annealing was carried out at relatively low temperatures (600 °C and 700 °C) for times ranging between 30 minutes and 2 hours. The good fit of the simulated profiles to the experimental secondary ion mass spectroscopy (SIMS) profiles provided a fundamental understanding of the instantaneous physical phenomena that produce and perturb the complex kinetics of the B redistribution profile shoulders.
Directory of Open Access Journals (Sweden)
Brijesh Yadav
2016-11-01
Full Text Available The present experiment was conducted to evaluate the effect of simulated heat stress on digestibility and methane (CH4) emission. Four non-lactating crossbred cattle were exposed to 25°C, 30°C, 35°C, and 40°C temperature with a relative humidity of 40% to 50% in a climatic chamber from 10:00 hours to 15:00 hours every day for 27 days. The physiological responses were recorded at 15:00 hours every day. Blood samples were collected at 15:00 hours on the 1st, 6th, 11th, 16th, and 21st days and serum was separated for biochemical analysis. After 21 days, fecal and feed samples were collected continuously for six days for the estimation of digestibility. In the last 48 hours gas samples were collected continuously to estimate CH4 emission. Heat stress in the experimental animals at 35°C and 40°C was evident from alterations (p<0.05) in rectal temperature, respiratory rate, pulse rate, water intake and serum thyroxin levels. The serum lactate dehydrogenase, aspartate aminotransferase, alanine aminotransferase and alkaline phosphatase activities and the protein, urea, creatinine and triglyceride concentrations changed (p<0.05), and the body weight of the animals decreased (p<0.05) after temperature exposure at 40°C. The dry matter intake (DMI) was lower (p<0.05) at 40°C exposure. The dry matter and neutral detergent fibre digestibilities were higher (p<0.05) at 35°C compared to 25°C and 30°C exposure, whereas organic matter (OM) and acid detergent fibre digestibilities were higher (p<0.05) at 35°C than at 40°C thermal exposure. The CH4 emission/kg DMI and organic matter intake (OMI) declined (p<0.05) with increase in exposure temperature and reached its lowest level at 40°C. It can be concluded from the present study that digestibility and CH4 emission were affected by the intensity of heat stress. Further studies are necessary with respect to ruminal microbial changes to explain the variation in digestibility and CH4 emission during differential heat stress.
Precision Laser Annealing of Focal Plane Arrays
Energy Technology Data Exchange (ETDEWEB)
Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeRose, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Starbuck, Andrew Lea [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verley, Jason C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jenkins, Mark W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
We present results from laser annealing experiments in Si using a passively Q-switched Nd:YAG microlaser. Exposure with laser at fluence values above the damage threshold of commercially available photodiodes results in electrical damage (as measured by an increase in photodiode dark current). We show that increasing the laser fluence to values in excess of the damage threshold can result in annealing of a damage site and a reduction in detector dark current by as much as 100x in some cases. A still further increase in fluence results in irreparable damage. Thus we demonstrate the presence of a laser annealing window over which performance of damaged detectors can be at least partially reconstituted. Moreover dark current reduction is observed over the entire operating range of the diode indicating that device performance has been improved for all values of reverse bias voltage. Additionally, we will present results of laser annealing in Si waveguides. By exposing a small (<10 um) length of a Si waveguide to an annealing laser pulse, the longitudinal phase of light acquired in propagating through the waveguide can be modified with high precision, <15 milliradian per laser pulse. Phase tuning by 180 degrees is exhibited with multiple exposures to one arm of a Mach-Zehnder interferometer at fluence values below the morphological damage threshold of an etched Si waveguide. No reduction in optical transmission at 1550 nm was found after 220 annealing laser shots. Modeling results for laser annealing in Si are also presented.