WorldWideScience

Sample records for maximin adaptive-array algorithm

  1. Maximin equilibrium

    NARCIS (Netherlands)

    Ismail, M.S.

    2014-01-01

    We introduce a new concept which extends von Neumann and Morgenstern's maximin strategy solution by incorporating `individual rationality' of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that
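
    As background for the classical notion being extended, the sketch below computes the pure-strategy maximin value of a two-player matrix game; it illustrates von Neumann and Morgenstern's maximin idea only, not the maximin equilibrium introduced in the paper, and the payoff matrix is made up for illustration.

      # Pure-strategy maximin for the row player of a matrix game (background sketch).
      import numpy as np

      def pure_maximin(payoff):
          """payoff[i, j]: row player's payoff when row strategy i meets column strategy j."""
          worst_case = payoff.min(axis=1)         # worst outcome of each row strategy
          best_row = int(worst_case.argmax())     # row that maximizes the worst case
          return best_row, worst_case[best_row]

      game = np.array([[3, -1,  0],
                       [1,  4, -2],
                       [2,  2,  3]])
      strategy, value = pure_maximin(game)
      print(strategy, value)   # row 2 guarantees a payoff of at least 2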

  2. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...

  3. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of adaptive blind algorithms – i.e. the Kaiser Constant Modulus Algorithm (KCMA) and Hamming CMA (HAMCMA) – against CMA in a wireless cellular communication system using digital modulation. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array dynamically. The simulation results reveal that, relative to CMA, KCMA and HAMCMA provide a lower minimum mean square error (MSE), antenna gain enhancements of 1.247 dB and 1.077 dB respectively, and a 75% reduction in bit error rate (BER). The KCMA and HAMCMA algorithms therefore give a cost-effective solution for a communication system.
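
    As context, the sketch below shows the standard constant modulus algorithm (CMA) weight update that the windowed variants (KCMA, HAMCMA) build on; the step size, target modulus, and toy signal model are illustrative assumptions, not the paper's setup.

      # Standard CMA update for an N-element adaptive array (illustrative sketch).
      import numpy as np

      def cma_update(w, x, mu=1e-3, R2=1.0):
          """One constant-modulus step. w: complex weights (N,), x: array snapshot (N,),
          R2: target squared modulus of the beamformer output."""
          y = np.vdot(w, x)                    # beamformer output y = w^H x
          e = y * (np.abs(y) ** 2 - R2)        # CMA error term
          return w - mu * np.conj(e) * x       # stochastic-gradient step

      # Toy usage: adapt toward a constant-modulus source in additive noise.
      rng = np.random.default_rng(0)
      N = 4
      w = np.ones(N, dtype=complex) / N
      for _ in range(5000):
          s = np.exp(1j * 2 * np.pi * rng.random())
          x = s * np.ones(N) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
          w = cma_update(w, x)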

  4. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    Directory of Open Access Journals (Sweden)

    Hailong Xu

    2016-03-01

    Full Text Available Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed is a feature-rich and extensible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as serve as a real product in certain applications.

  5. Practical Common Weight Maximin Approach for Technology Selection

    African Journals Online (AJOL)

    2014-06-30

    Jun 30, 2014 ... Keywords: Technology selection, Robot selection, Maximin ... manufacturers have consequently oriented the researchers to ... 'value judgements in data envelopment analysis: Evolution, development and future directions' ...

  6. Practical Common Weight Maximin Approach for Technology ...

    African Journals Online (AJOL)

    Its robustness and discriminating power are illustrated via a previously reported robot evaluation problem by comparing the ranking obtained by the proposed Maximin approach framework with that obtained by the classic DEA model (CCR model) and the Minimax method (Karsak & Ahiska, 2005). Because the number of ...

  7. Maxi-Min Language Use: A Critical Remark on a Concept by Philippe van Parijs

    Directory of Open Access Journals (Sweden)

    Kruse Jan

    2016-10-01

    Full Text Available Philippe van Parijs explains, in Linguistic Justice for Europe and for the World, the concept of maxi-min language use as a process of language choice. He suggests that the language chosen as a common language should maximize the minimal competence of a community: within a multilingual group of people, the chosen language is the language known best by the participant who knows it least. For obvious reasons, only English would qualify for that status. This article argues that maxi-min is rather a normative concept, not only because the process itself remains empirically unfounded, but also because language choice is the result of complex social and psychological structures. As a descriptive process, the maxi-min choice rarely happens in reality, whereas maxi-min language use seen as a normative process could be a very effective tool for measuring linguistic justice.

  8. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  9. Millimeter-Wave Microstrip Antenna Array Design and an Adaptive Algorithm for Future 5G Wireless Communication Systems

    Directory of Open Access Journals (Sweden)

    Cheng-Nan Hu

    2016-01-01

    Full Text Available This paper presents a high gain millimeter-wave (mmW) low-temperature cofired ceramic (LTCC) microstrip antenna array with a compact, simple, and low-profile structure. Incorporating minimum mean square error (MMSE) adaptive algorithms with the proposed 64-element microstrip antenna array, the numerical investigation reveals substantial improvements in interference reduction. A prototype is presented with a simple design for mass production. As an experiment, HFSS was used to simulate an antenna with a width of 1 mm and a length of 1.23 mm, resonating at 38 GHz. Two identical mmW LTCC microstrip antenna arrays were built for measurement, and the center element was excited. The results demonstrated a return loss better than 15 dB and a peak gain higher than 6.5 dBi at frequencies of interest, which verified the feasibility of the design concept.

  10. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within a feasible region derived from the array's state and the plane-wave information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
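
    For reference, the sketch below computes the closed-form minimum-variance distortionless-response (MVDR) weights that a convex beamforming solver of this kind is expected to approach for a uniform linear array; it is not the paper's RNN, and the array geometry and interference scenario are assumptions.

      # Closed-form MVDR beamformer for a uniform linear array (reference sketch).
      import numpy as np

      def steering_vector(n_elem, theta, d=0.5):
          """ULA steering vector; d is element spacing in wavelengths, theta in radians."""
          return np.exp(1j * 2 * np.pi * d * np.arange(n_elem) * np.sin(theta))

      def mvdr_weights(R, a):
          """Solve min w^H R w subject to w^H a = 1 (distortionless look direction)."""
          Ri_a = np.linalg.solve(R, a)
          return Ri_a / np.vdot(a, Ri_a)

      # Toy scenario: desired source at broadside, strong interferer off-axis.
      N = 8
      a_sig = steering_vector(N, 0.0)
      a_int = steering_vector(N, 0.5)
      R = 10 * np.outer(a_int, a_int.conj()) + np.eye(N)   # interference + noise covariance
      w = mvdr_weights(R, a_sig)
      print(abs(np.vdot(w, a_sig)), abs(np.vdot(w, a_int)))  # ~1.0 toward signal, near 0 toward interferer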

  11. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  12. Adaptive algorithm of magnetic heading detection

    Science.gov (United States)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate within a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, the magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian's motion states. In order to solve this problem, a new adaptive algorithm that exploits the (typically) right-angled corridors of office or residential buildings is put forward to process the heading information. First, a 3D indoor localization platform is set up based on the MPU9250. Then, several groups of data are measured by changing the experimental environment and the pedestrian's motion pace. The raw data from the attached inertial measurement unit are calibrated, arranged into a time-stamped array, and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with an existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, and can detect the heading information accurately and in real time.

  13. Compressive strength evolution of thermally-stressed Saint Maximin limestone.

    Science.gov (United States)

    Farquharson, J.; Griffiths, L.; Baud, P.; Wadsworth, F. B.; Heap, M. J.

    2017-12-01

    The Saint Maximin quarry (Oise, France) opened in the early 1600s, and its limestone has been used extensively as masonry stone, particularly during the classical era of Parisian architecture from the 17th century onwards. Its widespread use has been due to a combination of its regional availability, its high workability, and its aesthetic appeal. Notable buildings completed using this material include sections of the Place de la Concorde and the Louvre in Paris. More recently, it has seen increasing use in the construction of large private residences throughout the United States as well as extensions to private institutions such as Stanford University. For any large building, fire hazard can be a substantial concern, especially in tectonically active areas where catastrophic fires may arise following large-magnitude earthquakes. Typically, house fires burn at temperatures of around 600 °C (~1000 °F). Given the ubiquity of this geomaterial as a building stone, it is important to ascertain the influence of heating on the strength of Saint Maximin limestone (SML), and in turn the structural stability of the buildings it is used in. We performed a series of compressive tests and permeability measurements on samples of SML to determine its strength evolution in response to heating to incrementally higher temperatures. We observe that the uniaxial compressive strength of SML decreases from >12 MPa at room temperature to substantially lower values after heating to high temperatures (>400 °C). We anticipate that this substantial weakening is in part a result of thermal microcracking, whereby changes in temperature induce thermal stresses due to a mismatch in thermal expansion between the constituent grains. This mechanism is compounded by the volumetric increase of quartz through its alpha-beta transition at 573 °C, and by the thermal decomposition of calcite. To track the formation of thermal microcracks, we monitor acoustic emissions, a common proxy for microcracking, during the heating of an SML sample. The

  14. Adaptive algorithm based on antenna arrays for radio communication systems

    Directory of Open Access Journals (Sweden)

    Fedosov Valentin

    2017-01-01

    Full Text Available Trends in the modern world lead to the growing popularity of wireless technologies, driven by the rapid development of mobile communications, the popularity of the Internet, and the use of wireless networks in enterprises, offices, buildings, etc. This requires advanced network technologies with high throughput capacity to meet the needs of users. A popular research direction today is the development of spatial signal processing techniques that increase the spatial bandwidth of communication channels. The most popular method is MIMO spatial coding, which increases the data transmission speed by emitting several spatial streams from several antennas. Another advantage of this technology is that the bandwidth increase is achieved without expanding the allocated frequency range, which makes spatial coding methods even more attractive given the limited frequency resource. Currently, wireless communications (for example, WiFi and WiMAX) are increasingly used in information transmission networks. One of the main problems of evolving wireless systems is the need to increase bandwidth and improve the quality of service (reducing the error probability). Bandwidth can be increased by expanding the frequency band or increasing the radiated power. Nevertheless, these methods have drawbacks: owing to the requirements of biological protection and electromagnetic compatibility, increases in power and expansion of the frequency band are limited. This problem is especially relevant in mobile (cellular) communication systems and in wireless networks operating in difficult signal propagation conditions. One of the most effective ways to solve this problem is to use adaptive antenna arrays with weakly correlated antenna elements. Communication systems using such antennas are called MIMO (Multiple Input Multiple Output) systems. At the moment, existing MIMO-idea implementations do not

  15. Demosaicking algorithm for the Kodak-RGBW color filter array

    Science.gov (United States)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through a Color Filter Array (CFA) and then reconstruct the full-color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. The demosaicking algorithm estimates the two unknown color components at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green, and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  16. Introduction to adaptive arrays

    CERN Document Server

    Monzingo, Bob; Haupt, Randy

    2011-01-01

    This second edition is an extensive modernization of the bestselling introduction to the subject of adaptive array sensor systems. With the number of applications of adaptive array sensor systems growing each year, this look at the principles and fundamental techniques that are critical to these systems is more important than ever before. Introduction to Adaptive Arrays, 2nd Edition is organized as a tutorial, taking the reader by the hand and leading them through the maze of jargon that often surrounds this highly technical subject. It is easy to read and easy to follow as fundamental concept

  17. Optimization of partially shaded PV array using a modified P&O MPPT algorithm

    Directory of Open Access Journals (Sweden)

    Abdelaziz YOUCEF

    2016-07-01

    Full Text Available A photovoltaic (PV) array's generated power is directly affected by temperature, solar irradiation, shading, and array configuration. In practice, PV arrays can be partially shaded by clouds, buildings, trees, and other utilities. In this case, multiple maxima appear in the P-V curve: a global maximum and one or several local maxima. The "perturb and observe" (P&O) maximum power point tracking (MPPT) algorithm cannot differentiate between a global and a local maximum and is therefore ineffective when partial shading occurs. First, this paper presents an original mathematical model of the P-V curve of a partially shaded PV array, which is used in a simulation study to show the inability of the P&O algorithm to track the global MPP of a PV array under partial shading at low shading irradiation levels. Then, an adaptation sub-algorithm is proposed as an addition to the P&O algorithm in order to give it the ability to track the global MPP; this sub-algorithm moves the operating point imposed by the partial shading configuration to a point in the vicinity of the global MPP, which can then be easily tracked by the P&O algorithm. In the simulation, a PV array with a hundred modules is considered under light, medium, and severe shading configurations. The results obtained indicate that the proposed modified P&O algorithm is able to track the global MPP for the considered shading configurations and for any shading irradiation level.
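
    To make the baseline concrete, the sketch below shows the core perturb-and-observe step together with a coarse voltage sweep of the kind such an adaptation stage might perform; the function pv_power and all step sizes are hypothetical placeholders, not the paper's model.

      # Core P&O step plus a hypothetical coarse sweep toward the global MPP region.
      def po_step(v, p, v_prev, p_prev, dv=0.5):
          """Return the next operating voltage given the last two (voltage, power) samples."""
          if p >= p_prev:                      # power increased: keep perturbation direction
              step = dv if v >= v_prev else -dv
          else:                                # power decreased: reverse direction
              step = -dv if v >= v_prev else dv
          return v + step

      def global_scan(pv_power, v_min, v_max, n=20):
          """Coarse sweep of the P-V curve; pv_power(v) is an assumed measurement interface."""
          candidates = [v_min + i * (v_max - v_min) / (n - 1) for i in range(n)]
          return max(candidates, key=pv_power)   # hand this voltage back to po_step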

  18. Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array

    Science.gov (United States)

    Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert

    2002-05-01

    A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product: optimizing binaural adaptive beamforming, and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]

  19. New constraints for holographic entropy from maximin: A no-go theorem

    Science.gov (United States)

    Rota, Massimiliano; Weinberg, Sean J.

    2018-04-01

    The Ryu-Takayanagi (RT) formula for static spacetimes arising in the AdS/CFT correspondence satisfies inequalities that are not yet proven in the case of the Hubeny-Rangamani-Takayanagi (HRT) formula, which applies to general dynamical spacetimes. Wall's maximin construction is the only known technique for extending inequalities of holographic entanglement entropy from the static to the dynamical case. We show that this method currently has no further utility when dealing with inequalities for five or fewer regions. Despite this negative result, we propose the validity of one new inequality for covariant holographic entanglement entropy for five regions. This inequality, while not maximin provable, is much weaker than many of the inequalities satisfied by the RT formula and should therefore be easier to prove. If it is valid, then there is strong evidence that holographic entanglement entropy plays a role in general spacetimes, including those that arise in cosmology. Our new inequality is obtained by assuming that the HRT formula satisfies every known balanced inequality obeyed by the Shannon entropies of classical probability distributions. This is a property that the RT formula has been shown to possess and which has been previously conjectured to hold for quantum mechanics in general.

  20. CRISPRDetect: A flexible algorithm to define CRISPR arrays.

    Science.gov (United States)

    Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M

    2016-05-17

    CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features regarding CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and as a command-line tool from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variations (e.g. insertion/deletion) in near-identical repeats. Due to these features, CRISPRDetect has significant advantages when compared to existing identification tools. As well as providing further support for small, medium, and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats 44-50 nt). The CRISPRDetect output is integrated with other analysis tools. Notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers, and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation. It has been used to analyse all complete bacterial and archaeal reference genomes.

  1. A Novel DOA Estimation Algorithm Using Array Rotation Technique

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lan

    2014-03-01

    Full Text Available The performance of the traditional direction of arrival (DOA) estimation algorithm based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. In order to solve these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple-target estimation is proposed. By rotating two elements with a fixed time delay, a virtual UCA is formed from a small number of physical elements. The received signals are then sampled at multiple positions, which greatly improves the utilization of the array elements. 2D-DOA estimation for the rotating array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm in terms of resolution and estimation accuracy. Moreover, because of the significant reduction in the number of array elements, the antenna system is much simpler and less complex than a traditional array.
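
    As background, the sketch below evaluates a standard MUSIC pseudospectrum for a uniform circular array, the estimator that the virtual rotated array feeds into; the baseline-rotation sampling itself is not modeled, and the array radius and scan grid are illustrative.

      # Standard MUSIC pseudospectrum for a UCA (background sketch, not the rotation scheme).
      import numpy as np

      def uca_steering(n_elem, radius, az):
          """UCA steering vector; radius in wavelengths, azimuth az in radians."""
          phi = 2 * np.pi * np.arange(n_elem) / n_elem
          return np.exp(1j * 2 * np.pi * radius * np.cos(az - phi))

      def music_spectrum(X, n_src, radius, az_grid):
          """X: (n_elem, n_snapshots) data matrix; returns the pseudospectrum over az_grid."""
          R = X @ X.conj().T / X.shape[1]            # sample covariance
          _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
          En = vecs[:, : X.shape[0] - n_src]         # noise subspace
          proj = En @ En.conj().T
          spec = []
          for az in az_grid:
              a = uca_steering(X.shape[0], radius, az)
              spec.append(1.0 / np.real(a.conj() @ proj @ a))
          return np.array(spec)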

  2. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) adds adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using the opposition-based adaptive fireworks algorithm (OAFWA). The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.

  3. Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Shaobing Huang

    2017-01-01

    Full Text Available Adaptive beamforming is a powerful anti-interference technique in which searching for and tracking optimal solutions is a great challenge. In this paper, a partial Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer, owing to its strong global search capability. Also, because of its naturally parallel search capability, a novel Field Programmable Gate Array (FPGA) pipeline architecture using a polyphase filter bank structure is designed. In order to perform computations with a large dynamic range and high precision, the proposed implementation uses an efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve a fully pipelined implementation. In the case of PSO with a large population, the polyphase architecture can significantly save hardware resources while achieving high performance. Finally, simulation results are presented via cosimulation with ModelSim and SIMULINK.

  4. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, due to acoustic attenuation and the assumption of a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
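
    For orientation, the sketch below applies standard coherence-factor weighting to time-aligned channel data for one reconstruction point; the paper's HOG-based, array-level weighting is not reproduced, and the data layout is an assumption.

      # Coherence-factor (CF) weighting applied to delay-and-sum channel data (sketch).
      import numpy as np

      def coherence_factor(delayed):
          """delayed: (n_channels, n_samples) data already delayed to the reconstruction point."""
          num = np.abs(delayed.sum(axis=0)) ** 2
          den = delayed.shape[0] * (np.abs(delayed) ** 2).sum(axis=0)
          return num / np.maximum(den, 1e-12)        # CF in [0, 1]: 1 = fully coherent

      def cf_weighted_das(delayed):
          """Delay-and-sum output weighted by the coherence factor."""
          return coherence_factor(delayed) * delayed.sum(axis=0)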

  5. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  6. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Vol. 25, No. 1 (1996), pp. 7-45. ISSN 0308-1079. R&D Projects: GA ČR GA201/93/0781. Keywords: searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms. Impact factor: 0.214, year: 1996

  7. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays Trees Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks.Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  8. Distinguishing Byproducts from Non-Adaptive Effects of Algorithmic Adaptations

    Directory of Open Access Journals (Sweden)

    Justin H. Park

    2007-01-01

    Full Text Available I evaluate the use of the byproduct concept in psychology, particularly the adaptation-byproduct distinction that is commonly invoked in discussions of psychological phenomena. This distinction can be problematic when investigating algorithmic mechanisms and their effects, because although all byproducts may be functionless concomitants of adaptations, not all incidental effects of algorithmic adaptations are byproducts (although they have sometimes been labeled as such). I call attention to Sperber's (1994) distinction between proper domains and actual domains of algorithmic mechanisms. Extending Sperber's distinction, I propose the terms adaptive effects and non-adaptive effects, which more accurately capture the phenomena of interest to psychologists and prevent fruitless adaptation-versus-byproduct debates.

  9. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    Science.gov (United States)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) of two three-ring CCAA designs (having 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  10. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.

  11. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  12. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  13. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array.

    Science.gov (United States)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-20

    This research proposes various versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique for rectangular array geometry synthesis. Specifically, the MCS algorithm incorporates a roulette-wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the exploration of the positions of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA), forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined on Zitzler-Deb-Thiele's (ZDT's) test functions for MO optimization. Pareto optimum trade-offs are performed to generate a set of three non-dominated solutions: the locations, excitation amplitudes, and excitation phases of the array elements. Overall, the simulations demonstrate that the proposed MCSPSOSPEA outperforms other compatible competitors in gaining high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL), and/or significant mitigation of predefined nulls, simultaneously.

  14. Pattern Nulling of Linear Antenna Arrays Using Backtracking Search Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kerim Guney

    2015-01-01

    Full Text Available An evolutionary method based on the backtracking search optimization algorithm (BSA) is proposed for linear antenna array pattern synthesis with prescribed nulls at interference directions. Pattern nulling is obtained by controlling only the amplitude, position, and phase of the antenna array elements. BSA is an innovative metaheuristic technique based on an iterative process. Various numerical examples of linear array patterns with prescribed single, multiple, and wide nulls are given to illustrate the performance and flexibility of BSA. The results obtained by BSA are compared with the results of the following seventeen algorithms: particle swarm optimization (PSO), genetic algorithm (GA), modified touring ant colony algorithm (MTACO), quadratic programming method (QPM), bacterial foraging algorithm (BFA), bees algorithm (BA), clonal selection algorithm (CLONALG), plant growth simulation algorithm (PGSA), tabu search algorithm (TSA), memetic algorithm (MA), nondominated sorting GA-2 (NSGA-2), multiobjective differential evolution (MODE), decomposition with differential evolution (MOEA/D-DE), comprehensive learning PSO (CLPSO), harmony search algorithm (HSA), seeker optimization algorithm (SOA), and mean variance mapping optimization (MVMO). The simulation results show that linear antenna array synthesis using BSA provides low side-lobe levels and deep null levels.

  15. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In photoacoustic (PA) imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called Minimum Variance-Based D...
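
    To illustrate the distinction, the sketch below contrasts DAS with the pairwise-coupling DMAS combination for a single reconstruction point; the minimum-variance extension referenced above is not shown, and the time-aligned input is an assumption.

      # DAS versus DMAS for one reconstruction point (illustrative sketch).
      import numpy as np

      def das(s):
          """s: (n_channels,) samples already delayed to the point; plain sum."""
          return s.sum()

      def dmas(s):
          """Couple every distinct channel pair, keeping the sign and restoring dimensionality."""
          out = 0.0
          for i in range(len(s)):
              for j in range(i + 1, len(s)):
                  prod = s[i] * s[j]
                  out += np.sign(prod) * np.sqrt(np.abs(prod))
          return out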

  16. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  17. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.

  18. Solving “Antenna Array Thinning Problem” Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Rajashree Jain

    2012-01-01

    Full Text Available Thinning involves reducing the total number of active elements in an antenna array without causing major degradation in system performance. Dynamic thinning is the process of achieving this under real-time conditions. It is required to find a strategic subset of antenna elements for thinning so as to obtain optimum performance. From a mathematical perspective this is a nonlinear, multidimensional problem with multiple objectives and many constraints. A solution to such a problem cannot be obtained by classical analytical techniques; it requires some type of search algorithm that can lead to a practical solution in an optimal way. The present paper discusses an approach of using a genetic algorithm for array thinning. After discussing the basic concepts of antenna arrays, array thinning, dynamic thinning, and the application methodology, simulation results of applying the technique to linear and planar arrays are presented.
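
    As a rough illustration of the approach, the sketch below thins a uniform linear array with a simple genetic algorithm whose fitness penalizes the peak sidelobe level of the array factor; the population size, mutation rate, main-lobe mask, and array geometry are all illustrative assumptions rather than the paper's settings.

      # Simple GA for linear-array thinning: minimize peak sidelobe level (illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      N, POP, GEN, D = 40, 60, 100, 0.5            # elements, population, generations, spacing (wavelengths)
      u = np.linspace(-1.0, 1.0, 1001)             # u = sin(theta)
      phase = np.exp(1j * 2 * np.pi * D * np.outer(u, np.arange(N)))
      main_lobe = np.abs(u) < 2.0 / (N * D)        # crude main-lobe mask (assumption)

      def peak_sll(chrom):
          if chrom.sum() == 0:
              return 0.0                           # no active elements: worst possible fitness
          af = np.abs(phase @ chrom)
          return 20 * np.log10(af[~main_lobe].max() / af.max())

      pop = rng.integers(0, 2, size=(POP, N))
      for _ in range(GEN):
          fitness = np.array([peak_sll(c) for c in pop])       # lower (more negative) is better
          parents = pop[np.argsort(fitness)[: POP // 2]]       # truncation selection
          cuts = rng.integers(1, N, size=len(parents))
          kids = np.array([np.concatenate([parents[i][:c], parents[-1 - i][c:]])
                           for i, c in enumerate(cuts)])       # one-point crossover
          kids = np.where(rng.random(kids.shape) < 0.02, 1 - kids, kids)   # mutation
          pop = np.vstack([parents, kids])
      best = min(pop, key=peak_sll)
      print(int(best.sum()), "active elements, peak SLL:", round(peak_sll(best), 2), "dB")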

  19. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms using the graphics processing unit (GPU). AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the spatial distribution pattern of the data points and achieve more accurate predictions than those of IDW. In this paper, we first present two versions of the GPU-accelerated AIDW: a naive version that does not profit from shared memory and a tiled version that takes advantage of shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, in both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) in single precision the achieved speed-up can be up to 763 (on the GPU M5000), while in double precision the highest obtained speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
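
    For context, the sketch below gives a serial IDW interpolator together with a simplified density-based power adaptation in the spirit of AIDW; the adaptation rule shown here is a hypothetical stand-in, not the paper's formula, and no GPU code is reproduced.

      # Serial IDW with a simplified adaptive power rule (illustrative sketch).
      import numpy as np

      def idw(points, values, query, power=2.0, eps=1e-12):
          d = np.linalg.norm(points - query, axis=1)
          w = 1.0 / np.maximum(d, eps) ** power
          return float(w @ values / w.sum())

      def aidw(points, values, query, k=8):
          """Hypothetical adaptation: denser neighbourhoods use a lower power exponent."""
          d = np.sort(np.linalg.norm(points - query, axis=1))[:k]
          density = 1.0 / (d.mean() + 1e-12)
          power = float(np.clip(1.0 + 2.0 / (1.0 + density), 1.0, 3.0))
          return idw(points, values, query, power=power)

      rng = np.random.default_rng(0)
      pts = rng.random((200, 2))
      vals = np.sin(6 * pts[:, 0]) + pts[:, 1]
      q = np.array([0.5, 0.5])
      print(idw(pts, vals, q), aidw(pts, vals, q))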

  20. Convergence Performance of Adaptive Algorithms of L-Filters

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2003-01-01

    Full Text Available This paper deals with the determination of the convergence parameters of adaptive algorithms used in adaptive L-filter design. The stability of the adaptation process, the convergence rate or adaptation time, and the behaviour of the convergence curve are among the basic properties of adaptive algorithms. L-filters with a variety of adaptive algorithms were used to determine them. Establishing the convergence performance of adaptive filters is important mainly for hardware applications, where real-time filtering or adaptation of the filter coefficients from a small amount of input data is required.
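
    As an illustration of the object under study, the sketch below implements an adaptive L-filter whose weights act on the order statistics of each sliding window and are adapted by LMS, with the squared error recorded as a convergence curve; the window length and step size are illustrative choices.

      # Adaptive L-filter trained with LMS; returns the output and the squared-error curve.
      import numpy as np

      def adaptive_l_filter(x, d, win=9, mu=0.01):
          """x: noisy input, d: desired (training) signal of the same length."""
          w = np.zeros(win)
          w[win // 2] = 1.0                         # start as a median filter
          y = np.zeros(len(x))
          err2 = np.zeros(len(x))
          for n in range(win - 1, len(x)):
              window = np.sort(x[n - win + 1 : n + 1])   # order statistics of the window
              y[n] = w @ window
              e = d[n] - y[n]
              w += mu * e * window                       # LMS update of the L-filter weights
              err2[n] = e ** 2
          return y, err2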

  1. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array-processor-based computer system. The algorithm makes use of an accurate representation of pixel activity (a uniform square pixel model of the intensity distribution), and is performed rapidly due to the efficient handling of an array-based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of a pixel-driven nearest-neighbour projection operation onto an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to the original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency-space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and the evaluation of regions of interest in dynamic and gated SPECT.

  2. Active cancellation of probing in linear dipole phased array

    CERN Document Server

    Singh, Hema; Jha, Rakesh Mohan

    2015-01-01

    In this book, a modified improved LMS algorithm is employed for weight adaptation of dipole array for the generation of beam pattern in multiple signal environments. In phased arrays, the generation of adapted pattern according to the signal scenario requires an efficient adaptive algorithm. The antenna array is expected to maintain sufficient gain towards each of the desired source while at the same time suppress the probing sources. This cancels the signal transmission towards each of the hostile probing sources leading to active cancellation. In the book, the performance of dipole phased array is demonstrated in terms of fast convergence, output noise power and output signal-to-interference-and noise ratio. The mutual coupling effect and role of edge elements are taken into account. It is established that dipole array along with an efficient algorithm is able to maintain multilobe beamforming with accurate and deep nulls towards each probing source. This work has application to the active radar cross secti...

  3. Real-time algorithm for acoustic imaging with a microphone array.

    Science.gov (United States)

    Huang, Xun

    2009-05-01

    Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. The expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.

  4. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, it is computationally burdensome for today's single-processor systems, and a large memory is required for the storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithm. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
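
    For reference, the sketch below shows one serial maximum-likelihood EM (MLEM) update of the kind being parallelized; the mapping onto a linear DSP array is not reproduced, and the toy system matrix is an assumption.

      # One MLEM update for emission tomography, written serially (reference sketch).
      import numpy as np

      def mlem_step(lam, A, y, eps=1e-12):
          """lam: current image (n_pixels,), A: system matrix (n_bins, n_pixels), y: measured projections."""
          forward = A @ lam                          # expected projections
          ratio = y / np.maximum(forward, eps)       # measured / expected
          sens = np.maximum(A.sum(axis=0), eps)      # sensitivity image A^T 1
          return lam * (A.T @ ratio) / sens

      # Tiny toy problem: 2 pixels seen by 3 projection bins.
      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      truth = np.array([2.0, 5.0])
      y = A @ truth
      lam = np.ones(2)
      for _ in range(200):
          lam = mlem_step(lam, A, y)
      print(lam)    # approaches [2, 5]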

  5. Detection of Defective Sensors in Phased Array Using Compressed Sensing and Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Shafqat Ullah Khan

    2016-01-01

    Full Text Available A compressed-sensing-based array diagnosis technique is presented. The technique starts by collecting measurements of the far-field pattern. The system linking the difference between the field measured using the healthy reference array and the field radiated by the array under test is solved using a genetic algorithm (GA), a parallel coordinate descent (PCD) algorithm, and then a hybridized GA-PCD algorithm. These algorithms are applied to fully and partially defective antenna arrays. The simulation results indicate that the proposed hybrid algorithm performs best in terms of localization of element failures with a small number of measurements. In the proposed algorithm, the slow and premature convergence of the GA is avoided by combining it with the PCD algorithm. It has been shown that the hybrid GA-PCD algorithm provides an accurate diagnosis of fully and partially defective sensors compared to GA or PCD alone. Different simulations are provided to validate the performance of the designed algorithms in diversified scenarios.

  6. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    Science.gov (United States)

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide the enormous computing power required by ICA-based algorithms with multipath reverberation, a special digital processor is designed and implemented in an FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, which demonstrate successful speech enhancement in real environments in real time.

  7. Adaptive Beamforming Based on Complex Quaternion Processes

    Directory of Open Access Journals (Sweden)

    Jian-wu Tao

    2014-01-01

    Full Text Available Motivated by the benefits of array signal processing in the quaternion domain, we investigate the problem of adaptive beamforming based on complex quaternion processes in this paper. First, a complex quaternion least-mean-squares (CQLMS) algorithm is proposed and its performance is analyzed. The CQLMS algorithm is suitable for adaptive beamforming of a vector-sensor array. The weight vector update of the CQLMS algorithm is derived based on the complex gradient, leading to lower computational complexity. Because the complex quaternion can exhibit the orthogonal structure of an electromagnetic vector-sensor in a natural way, a complex quaternion model in the time domain is provided for a 3-component vector-sensor array, and the normalized adaptive beamformer using CQLMS is presented. Finally, simulation results are given to validate the performance of the proposed adaptive beamformer.

  8. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  9. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald

    2014-01-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a margin-condition parameter and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function, which governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  10. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a margin-condition parameter and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function, which governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  11. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    Science.gov (United States)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
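
    As a minimal illustration of the time-domain adaptive noise cancellation idea described above, the following Python sketch uses a least-mean-squares (LMS) filter to subtract a correlated reference channel from a primary channel. The signal model, filter length, and step size are illustrative assumptions, not parameters from the paper.

        # Adaptive noise cancellation sketch: an LMS filter estimates the noise
        # component of the primary channel from a correlated reference channel
        # and subtracts it, leaving an estimate of the desired signal.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20000
        t = np.arange(n) / 8000.0

        desired = np.sin(2 * np.pi * 200 * t)                     # assumed primary source
        noise = rng.standard_normal(n)                            # background noise
        noise_at_array = np.convolve(noise, [0.6, 0.3, 0.1])[:n]  # noise path to the array mic
        primary = desired + noise_at_array                        # array microphone signal
        reference = noise                                         # reference microphone signal

        L, mu = 16, 0.01                                          # filter length, step size
        w = np.zeros(L)
        cleaned = np.zeros(n)
        for k in range(L, n):
            x = reference[k - L:k][::-1]                          # most recent reference samples
            y = w @ x                                             # estimated noise at the array
            e = primary[k] - y                                    # error = cleaned output
            w += 2 * mu * e * x                                   # LMS weight update
            cleaned[k] = e

        print("residual error power:", np.mean((cleaned[L:] - desired[L:]) ** 2))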

  12. Genetic Algorithms for Case Adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Salem, A M [Computer Science Dept, Faculty of Computer and Information Sciences, Ain Shams University, Cairo (Egypt)]; Mohamed, A H [Solid State Dept., (NCRRT), Cairo (Egypt)]

    2008-07-01

    Case based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process for engineers within the CBR cycle. Its difficulties can be attributed to its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation. It can therefore decrease the computational complexity of the search space in problems having a great dependency on their domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains. However, it has been implemented for tablet formulation as the domain of application. The proposed system has improved the performance of CBR design systems.

  13. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    Case based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process for engineers within the CBR cycle. Its difficulties can be attributed to its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation. It can therefore decrease the computational complexity of the search space in problems having a great dependency on their domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains. However, it has been implemented for tablet formulation as the domain of application. The proposed system has improved the performance of CBR design systems

  14. Adaptive sensor fusion using genetic algorithms

    International Nuclear Information System (INIS)

    Fitzgerald, D.S.; Adams, D.G.

    1994-01-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive ''fuzzy'' sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion

  15. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
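
    A rough sketch contrasting delay-and-sum (DAS) with delay-multiply-and-sum (DMAS) combining on synthetic channel data is given below; the toy geometry, precomputed integer delays, and the signed square-root pairing rule follow the commonly published DMAS formulation and are assumptions here, not code from the paper.

        # DAS vs DMAS beamforming sketch for one focal point of a small array.
        import numpy as np

        def delayed(signals, delays):
            # Align each channel by its (assumed precomputed) integer sample delay.
            return np.array([np.roll(s, -int(d)) for s, d in zip(signals, delays)])

        def das(signals, delays):
            return delayed(signals, delays).sum(axis=0)

        def dmas(signals, delays):
            s = delayed(signals, delays)
            m, n = s.shape
            out = np.zeros(n)
            for i in range(m):
                for j in range(i + 1, m):
                    prod = s[i] * s[j]
                    out += np.sign(prod) * np.sqrt(np.abs(prod))  # signed square-root pairing
            return out

        rng = np.random.default_rng(1)
        n_ch, n_t = 8, 512
        k = np.arange(n_t)
        pulse = np.sin(2 * np.pi * k / 8) * np.exp(-((k - 60) / 10.0) ** 2)   # toy RF pulse
        delays = rng.integers(0, 5, n_ch)                                     # toy focusing delays
        channels = np.array([np.roll(pulse, int(d)) for d in delays])
        channels += 0.2 * rng.standard_normal((n_ch, n_t))                    # channel noise

        print("DAS  peak:", np.max(np.abs(das(channels, delays))))
        print("DMAS peak:", np.max(np.abs(dmas(channels, delays))))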

  16. Computationally efficient optimisation algorithms for WECs arrays

    DEFF Research Database (Denmark)

    Ferri, Francesco

    2017-01-01

    In this paper two derivative-free global optimization algorithms are applied for the maximisation of the energy absorbed by wave energy converter (WEC) arrays. Wave energy is a large and mostly untapped source of energy that could have a key role in the future energy mix. The collection of this r...

  17. NeatSort - A practical adaptive algorithm

    OpenAIRE

    La Rocca, Marcello; Cantone, Domenico

    2014-01-01

    We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more importantly, has a simple and quick implementation. On input $X$, our algorithm has a theoretical $\Omega(|X|)$ lower bound and an $\mathcal{O}(|X|\log|X|)$ upper bound, exhibiting remarkable adaptive properties which make it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, NeatSort has proven itself competitive with (and of...

  18. Synthesis of Thinned Concentric Circular Antenna Arrays Using Modified TLBO Algorithm

    Directory of Open Access Journals (Sweden)

    Zailei Luo

    2015-01-01

    Full Text Available Teaching-learning-based optimization (TLBO) algorithm is a new kind of stochastic metaheuristic algorithm which has been proven effective and powerful in many engineering optimization problems. This paper describes the application of a modified version of the TLBO algorithm, MTLBO, for synthesis of thinned concentric circular antenna arrays (CCAAs). The MTLBO is adjusted for CCAA design according to the geometry arrangement of antenna elements. CCAAs with uniform interelement spacing fixed at half wavelength have been considered for thinning using the MTLBO algorithm. For practical purposes, this paper demonstrates SLL reduction of thinned CCAAs in the whole regular and extended space other than the phi = 0° plane alone. The uniformly and nonuniformly excited CCAAs have been discussed, respectively, during the simulation process. The proposed MTLBO is very easy to implement and requires fewer algorithm-specific parameters, which is suitable for concentric circular antenna array synthesis. Numerical results clearly show the superiority of the MTLBO algorithm in finding optimum solutions compared to the particle swarm optimization algorithm and the firefly algorithm.

  19. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images, Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University ...

  20. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
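
    The sketch below shows a differential evolution loop whose control parameters F and Cr evolve with the search, in the spirit of the self-adaptive behaviour described above; the unified mutation expression of the paper is not reproduced, and the jDE-style resampling rule used here is an assumption.

        # Differential evolution with self-adaptive F and Cr (jDE-style rule,
        # used here only to illustrate letting control parameters evolve).
        import numpy as np

        def sphere(x):
            return float(np.sum(x ** 2))

        def adaptive_de(f, dim=10, pop_size=30, gens=200, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-5, 5, (pop_size, dim))
            fit = np.array([f(x) for x in pop])
            F = np.full(pop_size, 0.5)
            Cr = np.full(pop_size, 0.9)
            for _ in range(gens):
                for i in range(pop_size):
                    # Occasionally resample this individual's control parameters.
                    Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                    Cri = rng.uniform(0.0, 1.0) if rng.random() < 0.1 else Cr[i]
                    idx = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
                    a, b, c = pop[idx]
                    mutant = a + Fi * (b - c)                    # rand/1 mutation
                    cross = rng.random(dim) < Cri
                    cross[rng.integers(dim)] = True              # ensure at least one gene crosses
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft <= fit[i]:                             # greedy selection
                        pop[i], fit[i], F[i], Cr[i] = trial, ft, Fi, Cri
            return pop[np.argmin(fit)], float(np.min(fit))

        best_x, best_f = adaptive_de(sphere)
        print("best value found:", best_f)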

  1. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we directly despread the array outputs at each output tap by the desired user's code, which saves the synthesis filter. Although its configuration is far different from that of 2D RAKEs, the proposed scheme exhibits performance roughly equivalent to that of 2D RAKEs while having a lower computation load due to utilising adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and compare it with that of the standard 2D RAKE.

  2. A FPC-ROOT Algorithm for 2D-DOA Estimation in Sparse Array

    Directory of Open Access Journals (Sweden)

    Wenhao Zeng

    2016-01-01

    Full Text Available To improve the performance of two-dimensional direction-of-arrival (2D DOA) estimation in sparse arrays, this paper presents a Fixed Point Continuation Polynomial Roots (FPC-ROOT) algorithm. Firstly, a signal model for DOA estimation is established based on matrix completion and it can be proved that the proposed model meets the Null Space Property (NSP). Secondly, the left and right singular vectors of the received signals matrix are obtained using the matrix completion algorithm. Finally, the 2D DOA estimation can be acquired through solving the polynomial roots. The proposed algorithm can achieve high accuracy of 2D DOA estimation in sparse arrays, without solving the autocorrelation matrix of the received signals or scanning a two-dimensional spectral peak. Besides, it decreases the number of antennas, lowers computational complexity, and avoids the angle ambiguity problem. Computer simulations demonstrate that the proposed FPC-ROOT algorithm can obtain the 2D DOA estimation precisely in sparse arrays.

  3. Small-angle tomography algorithm for transmission inspection of acoustic linear array

    Directory of Open Access Journals (Sweden)

    Soldatov Alexey

    2016-01-01

    Full Text Available The paper describes a tomographic image reconstruction algorithm used in the through-transmission method with small-angle sounding of acoustic linear arrays, and presents results of the practical application of the proposed algorithm. By alternately probing with each element of the emitting array while receiving simultaneously on all elements of the receiving array, a collection of shadow images of the testing zone is obtained. The testing zone is divided into small local areas, and from the collection of shadow images a matrix of normalized transmission coefficients is computed for each small local area. The tomographic image of the testing zone is obtained by presenting the resulting matrix of normalized transmission coefficients in grayscale or color.

  4. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as the noise detector. Then, we utilize the well-trained GA-BPN to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by GA-BPN. Experimental data show that this algorithm has better performance than other filters.

  5. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  6. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  7. Parallel DC3 Algorithm for Suffix Array Construction on Many-Core Accelerators

    KAUST Repository

    Liao, Gang

    2015-05-01

    In bioinformatics applications, suffix arrays are widely used in DNA sequence alignment in the initial exact match phase of heuristic algorithms. With the exponential growth and availability of data, using many-core accelerators, like GPUs, to optimize existing algorithms is very common. We present a new implementation of suffix array construction on GPU. As a result, suffix array construction on GPU achieves around 10x speedup on standard large data sets, which contain more than 100 million characters. The approach is simple, fast, and scalable, and can easily scale to multi-core processors and even heterogeneous architectures. © 2015 IEEE.
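
    For orientation, the following short sketch builds a suffix array with simple prefix doubling; it is neither the DC3 algorithm nor GPU code, and is included only to make concrete what the accelerated implementation computes.

        # Suffix array construction by prefix doubling (O(n log^2 n) with sorting).
        def suffix_array(s):
            """Return the suffix array of string s."""
            n = len(s)
            rank = [ord(c) for c in s]
            sa = list(range(n))
            k = 1
            while True:
                # Sort suffixes by their first k characters, then the next k.
                key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
                sa.sort(key=key)
                new_rank = [0] * n
                for j in range(1, n):
                    new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
                rank = new_rank
                if rank[sa[-1]] == n - 1:      # all ranks distinct: done
                    break
                k *= 2
            return sa

        print(suffix_array("banana"))   # expected [5, 3, 1, 0, 4, 2]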

  8. Parallel DC3 Algorithm for Suffix Array Construction on Many-Core Accelerators

    KAUST Repository

    Liao, Gang; Ma, Longfei; Zang, Guangming; Tang, Lin

    2015-01-01

    In bioinformatics applications, suffix arrays are widely used in DNA sequence alignment in the initial exact match phase of heuristic algorithms. With the exponential growth and availability of data, using many-core accelerators, like GPUs, to optimize existing algorithms is very common. We present a new implementation of suffix array construction on GPU. As a result, suffix array construction on GPU achieves around 10x speedup on standard large data sets, which contain more than 100 million characters. The approach is simple, fast, and scalable, and can easily scale to multi-core processors and even heterogeneous architectures. © 2015 IEEE.

  9. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    Full Text Available We describe C++ classes that simplify development of adaptive mesh refinement (AMR algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  10. Metaheuristic algorithms for building Covering Arrays: A review

    Directory of Open Access Journals (Sweden)

    Jimena Adriana Timaná-Peña

    2016-09-01

    Full Text Available Covering Arrays (CAs) are mathematical objects used in the functional testing of software components. They enable the testing of all interactions of a given size of input parameters in a procedure, function, or logical unit in general, using the minimum number of test cases. Building CAs is a complex task (an NP-complete problem) that involves lengthy execution times and high computational loads. The most effective methods for building CAs are algebraic, greedy, and metaheuristic-based. The latter have reported the best results to date. This paper presents a description of the major contributions made by a selection of different metaheuristics, including simulated annealing, tabu search, genetic algorithms, ant colony algorithms, particle swarm algorithms, and harmony search algorithms. It is worth noting that simulated annealing-based algorithms have evolved as the most competitive, and currently form the state of the art.

  11. Two-dimensional Fast ESPRIT Algorithm for Linear Array SAR Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Yi-chao

    2015-10-01

    Full Text Available The linear array Synthetic Aperture Radar (SAR) system is a popular research tool because it can realize three-dimensional imaging. However, owing to limitations of the aircraft platform and actual conditions, resolution improvement is difficult in the cross-track and along-track directions. In this study, a two-dimensional fast Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT) algorithm for linear array SAR imaging is proposed to overcome these limitations. This approach combines the Gerschgorin disks method and the ESPRIT algorithm to estimate the positions of scatterers in the cross-track and along-track directions. Moreover, the reflectivity of scatterers is obtained by a modified pairing method based on “region growing”, replacing the least-squares method. The simulation results demonstrate the applicability of the algorithm, with high resolution, quick calculation, and good real-time response.

  12. Adaptive switching gravitational search algorithm: an attempt to ...

    Indian Academy of Sciences (India)

    Nor Azlina Ab Aziz

    An adaptive gravitational search algorithm (GSA) that switches between synchronous and ... genetic algorithm (GA), bat-inspired algorithm (BA) and grey wolf optimizer (GWO). ...

  13. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  14. An Enhanced Jaya Algorithm with a Two Group Adaption

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2017-01-01

    Full Text Available This paper proposes a novel performance-enhanced Jaya algorithm with a two-group adaption (E-Jaya). Two improvements are presented in E-Jaya. First, instead of using the best and the worst values as in the Jaya algorithm, E-Jaya separates all candidates into two groups, the better and the worse groups, based on their fitness values; the mean of the better group and the mean of the worse group are then used. Second, in order to keep E-Jaya free of algorithm-specific parameters, a novel adaptive method of dividing the two groups has been developed. Finally, twelve benchmark functions with different dimensionality, such as 40, 60, and 100, were evaluated using the proposed E-Jaya algorithm. The results show that E-Jaya significantly outperformed the Jaya algorithm in terms of solution accuracy. Additionally, E-Jaya was also compared with differential evolution (DE), self-adapting control parameters in differential evolution (jDE), the firefly algorithm (FA), and the standard particle swarm optimization 2011 (SPSO2011) algorithm. The E-Jaya algorithm outperforms all of these algorithms.
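
    A small sketch of a Jaya-style update that replaces the single best and worst individuals with the means of a better group and a worse group, as the abstract describes, is given below; the exact grouping rule and update expression of E-Jaya are not reproduced, so the variant shown is illustrative only.

        # Two-group Jaya-style update: move towards the mean of the better half
        # and away from the mean of the worse half, keeping improving candidates.
        import numpy as np

        def two_group_jaya(f, dim=10, pop_size=30, iters=300, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-10, 10, (pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(iters):
                order = np.argsort(fit)
                better_mean = pop[order[: pop_size // 2]].mean(axis=0)
                worse_mean = pop[order[pop_size // 2:]].mean(axis=0)
                for i in range(pop_size):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    cand = pop[i] + r1 * (better_mean - np.abs(pop[i])) \
                                  - r2 * (worse_mean - np.abs(pop[i]))
                    fc = f(cand)
                    if fc < fit[i]:            # keep the candidate only if it improves
                        pop[i], fit[i] = cand, fc
            return float(fit.min())

        print(two_group_jaya(lambda x: float(np.sum(x ** 2))))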

  15. Adaptive discrete-ordinates algorithms and strategies

    International Nuclear Information System (INIS)

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)

  16. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization algorithm (VURPSO). The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate a trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system to reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to yield a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.

  17. Frequency-domain imaging algorithm for ultrasonic testing by application of matrix phased arrays

    Directory of Open Access Journals (Sweden)

    Dolmatov Dmitry

    2017-01-01

    Full Text Available Constantly increasing demand for high-performance materials and systems in the aerospace industry requires advanced methods of nondestructive testing. One of the most promising methods is ultrasonic imaging using matrix phased arrays. This technique allows three-dimensional ultrasonic images to be created with high lateral resolution. Further progress in matrix phased array ultrasonic testing is determined by the development of fast imaging algorithms. In this article an imaging algorithm based on frequency domain calculations is proposed. This approach is computationally efficient in comparison with time domain algorithms. Performance of the proposed algorithm was tested via computer simulations for a planar specimen with flat-bottom holes.

  18. Adaptation of Rejection Algorithms for a Radar Clutter

    Directory of Open Access Journals (Sweden)

    D. Popov

    2017-09-01

    Full Text Available In this paper, algorithms for adaptive rejection of radar clutter are synthesized for the case of a priori unknown spectral-correlation characteristics under wobbulation of the repetition period of the radar signal. The synthesis of algorithms for the non-recursive adaptive rejection filter (ARF) of a given order is reduced to determination of the vector of weighting coefficients which realizes the best effectiveness index for radar signal extraction from moving targets on the background of the received clutter. As the effectiveness criterion, we consider the improvement coefficient for the signal-to-clutter ratio (SCR), averaged over the Doppler signal phase shift. On the basis of the extremal properties of the characteristic numbers (eigenvalues) of matrices, the optimal vector (according to the maximum of this criterion) is defined as the eigenvector of the clutter correlation matrix corresponding to its minimal eigenvalue. The general form of the vector of optimal ARF weighting coefficients is determined, and specific adaptive algorithms depending upon the ARF order are obtained, which in specific cases reduce to known algorithms, confirming their validity. A comparative analysis of the synthesized and known algorithms is performed. Significant benefits in clutter rejection effectiveness are established for the proposed processing algorithms compared to the known processing algorithms.
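
    The eigenvector construction described above can be illustrated with a few lines of linear algebra: the rejection-filter weight vector is taken as the eigenvector of the clutter correlation matrix associated with its smallest eigenvalue. The exponentially correlated clutter model below is a toy assumption, not data from the paper.

        # Rejection-filter weights as the minimal-eigenvalue eigenvector of the
        # clutter correlation matrix (toy exponentially correlated clutter model).
        import numpy as np

        order = 6                                   # ARF order (number of taps), assumed
        rho = 0.95                                  # assumed clutter correlation coefficient
        lags = np.abs(np.subtract.outer(np.arange(order), np.arange(order)))
        R = rho ** lags                             # toy clutter correlation matrix

        eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
        w_opt = eigvecs[:, 0]                       # eigenvector of the minimal eigenvalue

        # Residual clutter power at the filter output is proportional to w^T R w,
        # which this unit-norm choice of weights minimizes.
        print("residual clutter power:", float(w_opt @ R @ w_opt))
        print("minimal eigenvalue:    ", float(eigvals[0]))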

  19. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predict the detection performance of a real system ... Nickel: Adaptive Beamforming for Phased Array Radars. Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906. ... strategies for airborne radar. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331.

  20. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Directory of Open Access Journals (Sweden)

    Zahoor Uddin

    2018-01-01

    Full Text Available Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separation of mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios with high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Amongst batch algorithms, the gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter, with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulation is performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNR) and input data block lengths.

  1. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    Science.gov (United States)

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  2. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  3. Adaptive ground implemented phase array

    Science.gov (United States)

    Spearing, R. E.

    1973-01-01

    The simulation of an adaptive ground implemented phased array of five antenna elements is reported for a very high frequency system design that is tolerant to the radio frequency interference environment encountered by a tracking data relay satellite. Signals originating from satellites are received by the VHF ring array, and both horizontal and vertical polarizations from each of the five elements are multiplexed and transmitted down to the ground station. A panel on the transmitting end of the simulation chamber contains up to 10 S-band RFI sources along with the desired signal to simulate the dynamic relationship between user and TDRS. The 10 input channels are summed, and desired and interference signals are separated and corrected until the resultant sum signal-to-interference ratio is maximized. Testing performed with this simulation equipment demonstrates good correlation between predicted and actual results.

  4. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, with increasing numbers of sub-apertures in the wavefront sensor and deformable mirror actuators of adaptive optics systems, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor influencing the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative arithmetic, which gains great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) ∼ O(n^3) for the direct gradient wavefront control algorithm, while the computational complexity estimate for the iterative wavefront control algorithm is about O(n) ∼ O(n^{3/2}), in which n is the number of actuators of the AO system. And the more the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. (paper)
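
    A compact sketch of the two solution strategies is given below: the direct gradient approach applies a pseudo-inverse reconstruction matrix to the measured slopes, while the iterative approach solves the same least-squares problem with conjugate gradient iterations. The random interaction matrix stands in for the measured actuator-to-slope response and is purely an assumption for illustration.

        # Direct (pseudo-inverse) vs iterative (conjugate gradient) wavefront solve.
        import numpy as np

        rng = np.random.default_rng(0)
        n_slopes, n_act = 200, 80
        R = rng.standard_normal((n_slopes, n_act))        # interaction (influence) matrix, assumed
        slopes = rng.standard_normal(n_slopes)            # measured wavefront slopes, assumed

        # Direct gradient approach: precomputed pseudo-inverse applied to the slopes.
        v_direct = np.linalg.pinv(R) @ slopes

        # Iterative approach: conjugate gradient on the normal equations R^T R v = R^T s.
        def conjugate_gradient(A, b, iters=200, tol=1e-10):
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            for _ in range(iters):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)
                x += alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    break
                p = r_new + (r_new @ r_new) / (r @ r) * p
                r = r_new
            return x

        v_iter = conjugate_gradient(R.T @ R, R.T @ slopes)
        print("max difference between the two solutions:", np.max(np.abs(v_direct - v_iter)))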

  5. DOA and Polarization Estimation Using an Electromagnetic Vector Sensor Uniform Circular Array Based on the ESPRIT Algorithm.

    Science.gov (United States)

    Wu, Na; Qu, Zhiyu; Si, Weijian; Jiao, Shuhong

    2016-12-13

    In array signal processing systems, the direction of arrival (DOA) and polarization of signals based on uniform linear or rectangular sensor arrays are generally obtained by rotational invariance techniques (ESPRIT). However, since the ESPRIT algorithm relies on the rotational invariant structure of the received data, it cannot be applied to electromagnetic vector sensor arrays (EVSAs) featuring uniform circular patterns. To overcome this limitation, a fourth-order cumulant-based ESPRIT algorithm is proposed in this paper, for joint estimation of DOA and polarization based on a uniform circular EVSA. The proposed algorithm utilizes the fourth-order cumulant to obtain a virtual extended array of a uniform circular EVSA, from which the pairs of rotation invariant sub-arrays are obtained. The ESPRIT algorithm and parameter pair matching are then utilized to estimate the DOA and polarization of the incident signals. The closed-form parameter estimation algorithm can effectively reduce the computational complexity of the joint estimation, which has been demonstrated by numerical simulations.

  6. ERAASR: an algorithm for removing electrical stimulation artifacts from multielectrode array recordings

    Science.gov (United States)

    O'Shea, Daniel J.; Shenoy, Krishna V.

    2018-04-01

    Objective. Electrical stimulation is a widely used and effective tool in systems neuroscience, neural prosthetics, and clinical neurostimulation. However, electrical artifacts evoked by stimulation prevent the detection of spiking activity on nearby recording electrodes, which obscures the neural population response evoked by stimulation. We sought to develop a method to clean artifact-corrupted electrode signals recorded on multielectrode arrays in order to recover the underlying neural spiking activity. Approach. We created an algorithm which performs estimation and removal of array artifacts via sequential principal components regression (ERAASR). This approach leverages the similar structure of artifact transients, but not spiking activity, across simultaneously recorded channels on the array, across pulses within a train, and across trials. The ERAASR algorithm requires no special hardware, imposes no requirements on the shape of the artifact or the multielectrode array geometry, and comprises sequential application of straightforward linear methods with intuitive parameters. The approach should be readily applicable to most datasets where stimulation does not saturate the recording amplifier. Main results. The effectiveness of the algorithm is demonstrated in macaque dorsal premotor cortex using acute linear multielectrode array recordings and single electrode stimulation. Large electrical artifacts appeared on all channels during stimulation. After application of ERAASR, the cleaned signals were quiescent on channels with no spontaneous spiking activity, whereas spontaneously active channels exhibited evoked spikes which closely resembled spontaneously occurring spiking waveforms. Significance. We hope that enabling simultaneous electrical stimulation and multielectrode array recording will help elucidate the causal links between neural activity and cognition and facilitate naturalistic sensory prostheses.
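
    A toy sketch of the across-channel cleaning idea is shown below: the dominant shared artifact components are estimated with a principal components decomposition over channels and regressed out of each channel. ERAASR applies this kind of step sequentially over channels, pulses, and trials, and excludes the channel being cleaned when fitting; only a single simplified step is sketched here on synthetic data.

        # Simplified across-channel artifact removal: find shared temporal
        # components over channels (SVD/PCA) and regress them out per channel.
        # (Unlike ERAASR proper, the channel being cleaned is not excluded here.)
        import numpy as np

        rng = np.random.default_rng(0)
        n_ch, n_t = 32, 1000
        t = np.arange(n_t)
        artifact = np.exp(-t / 50.0) * np.sin(2 * np.pi * t / 40.0)   # shared transient, assumed
        gains = rng.uniform(0.5, 2.0, n_ch)                           # per-channel artifact gain
        data = np.outer(gains, artifact) + 0.05 * rng.standard_normal((n_ch, n_t))

        centered = data - data.mean(axis=1, keepdims=True)
        U, S, Vt = np.linalg.svd(centered, full_matrices=False)
        artifact_basis = Vt[:2]                                       # top shared temporal components

        cleaned = np.empty_like(data)
        for c in range(n_ch):
            # Least-squares fit of the shared components to this channel, then subtract.
            coefs, *_ = np.linalg.lstsq(artifact_basis.T, data[c], rcond=None)
            cleaned[c] = data[c] - artifact_basis.T @ coefs

        print("rms before:", float(np.sqrt(np.mean(data ** 2))))
        print("rms after: ", float(np.sqrt(np.mean(cleaned ** 2))))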

  7. AMOBH: Adaptive Multiobjective Black Hole Algorithm.

    Science.gov (United States)

    Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen

    2017-01-01

    This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has the characteristics of low computational complexity and maintains a good balance of convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. Firstly, the Pareto front is mapped to a new objective space called parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. At last, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH has a good performance in terms of convergence rate, population diversity, population convergence, subpopulation obtention of different Pareto regions, and time complexity to the latter in most cases.

  8. An adaptive inverse kinematics algorithm for robot manipulators

    Science.gov (United States)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  9. Research of Improved Apriori Algorithm Based on Itemset Array

    Directory of Open Access Journals (Sweden)

    Naili Liu

    2013-06-01

    Full Text Available Mining frequent itemsets is a key process in data mining research. Apriori and many improved algorithms are inefficient because they need to scan the database many times and store transaction IDs in memory, so the time and space overhead is very high; in particular, they are less efficient when processing large-scale databases. The main task of the improved algorithm is to reduce the time and space overhead of mining frequent itemsets. It scans the database only once to generate a binary itemset array, uses binary flags instead of transaction IDs to mark transactions, and uses logical AND operations to judge whether an itemset is frequent. Moreover, the improved algorithm is more suitable for large-scale databases. Experimental results show that the improved algorithm is more efficient than the classic Apriori algorithm.
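
    The binary itemset array idea can be illustrated with a short sketch: the database is scanned once to build a boolean item-by-transaction array, and the support of a candidate itemset is then obtained with a logical AND instead of re-scanning transactions. The tiny transaction set and the pairs-only candidate generation are illustrative assumptions.

        # Support counting with a boolean item-by-transaction array and logical AND.
        from itertools import combinations
        import numpy as np

        transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}, {"a", "b", "c"}]
        items = sorted(set().union(*transactions))
        # One pass over the database: rows are items, columns are transactions.
        flags = np.array([[item in t for t in transactions] for item in items], dtype=bool)
        index = {item: i for i, item in enumerate(items)}
        min_support = 2

        frequent_1 = [it for it in items if flags[index[it]].sum() >= min_support]

        frequent_2 = []
        for x, y in combinations(frequent_1, 2):
            # AND of the two item rows replaces another database scan.
            support = int(np.logical_and(flags[index[x]], flags[index[y]]).sum())
            if support >= min_support:
                frequent_2.append(((x, y), support))

        print("frequent 1-itemsets:", frequent_1)
        print("frequent 2-itemsets:", frequent_2)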

  10. Nature-inspired Cuckoo Search Algorithm for Side Lobe Suppression in a Symmetric Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    K. N. Abdul Rani

    2012-09-01

    Full Text Available In this paper, we propose a newly modified cuckoo search (MCS) algorithm integrated with the Roulette wheel selection operator and an inertia weight controlling the search ability, for synthesizing symmetric linear array geometry with minimum side lobe level (SLL) and/or null control. The basic cuckoo search (CS) algorithm is primarily based on the natural obligate brood parasitic behavior of some cuckoo species in combination with the Levy flight behavior of some birds and fruit flies. The CS metaheuristic approach is straightforward and capable of effectively solving general N-dimensional, linear and nonlinear optimization problems. The array geometry synthesis is first formulated as an optimization problem with the goal of SLL suppression and/or prescribed null placement in certain directions, and then solved by the newly modified MCS algorithm for the optimum element or isotropic radiator locations in the azimuth-plane or xy-plane. The study also focuses on the four internal parameters of the MCS algorithm, specifically on their implicit effects on the array synthesis. The optimal inter-element spacing solutions obtained by the MCS optimizer are validated through comparisons with the standard CS optimizer and the conventional array within the uniform and the Dolph-Chebyshev envelope patterns using MATLABTM. Finally, we also compare the fine-tuned MCS algorithm with two popular evolutionary algorithm (EA) techniques, particle swarm optimization (PSO) and genetic algorithms (GA).
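
    For context, a basic cuckoo search loop with Levy flights is sketched below; the Roulette wheel selection and inertia weight modifications of the MCS, and the actual side lobe level cost function, are not reproduced, so a generic sphere objective stands in as a placeholder.

        # Basic cuckoo search with Levy flights (Mantegna step lengths).
        import numpy as np
        from math import gamma

        def levy_step(rng, dim, beta=1.5):
            # Mantegna's algorithm for Levy-distributed step lengths.
            sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)

        def cuckoo_search(f, dim=8, n_nests=20, iters=500, pa=0.25, alpha=0.01, seed=0):
            rng = np.random.default_rng(seed)
            nests = rng.uniform(-5, 5, (n_nests, dim))
            fit = np.array([f(x) for x in nests])
            for _ in range(iters):
                best = nests[np.argmin(fit)]
                for i in range(n_nests):
                    cand = nests[i] + alpha * levy_step(rng, dim) * (nests[i] - best)
                    fc = f(cand)
                    j = rng.integers(n_nests)            # compare with a randomly chosen nest
                    if fc < fit[j]:
                        nests[j], fit[j] = cand, fc
                # Abandon a fraction pa of the worst nests and rebuild them randomly.
                n_abandon = int(pa * n_nests)
                worst = np.argsort(fit)[-n_abandon:]
                nests[worst] = rng.uniform(-5, 5, (n_abandon, dim))
                fit[worst] = [f(x) for x in nests[worst]]
            return float(fit.min())

        print(cuckoo_search(lambda x: float(np.sum(x ** 2))))   # placeholder objective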

  11. Theory of affine projection algorithms for adaptive filtering

    CERN Document Server

    Ozeki, Kazuhiko

    2016-01-01

    This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...

  12. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  13. Adaptive protection algorithm and system

    Science.gov (United States)

    Hedrick, Paul [Pittsburgh, PA]; Toms, Helen L [Irwin, PA]; Miller, Roger M [Mars, PA]

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  14. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    Science.gov (United States)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE named ADE, where the choice of the control parameters F and Cr is not fixed at some constant value but is adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation to it, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.

  15. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    Science.gov (United States)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot for Cloud service reservations. For the concurrent price and timeslot negotiation, a tradeoff algorithm to generate and evaluate a proposal which consists of price and timeslot proposal is necessary. The contribution of this work is thus to design an adaptive tradeoff algorithm for multi-issue negotiation mechanism. The tradeoff algorithm referred to as "adaptive burst mode" is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating concurrent set of proposals. The empirical results obtained from simulations carried out using a testbed suggest that due to the concurrent price and timeslot negotiation mechanism with adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; 2) the number of evaluations of each proposal is comparatively lower than previous scheme (burst-N).

  16. Optimized Adaptive Perturb and Observe Maximum Power Point Tracking Control for Photovoltaic Generation

    Directory of Open Access Journals (Sweden)

    Luigi Piegari

    2015-04-01

    Full Text Available The power extracted from PV arrays is usually maximized using maximum power point tracking algorithms. One of the most widely used techniques is the perturb & observe algorithm, which periodically perturbs the operating point of the PV array, sometimes with an adaptive perturbation step, and compares the PV power before and after the perturbation. This paper analyses the most suitable perturbation step to optimize maximum power point tracking performance and suggests a design criterion to select the parameters of the controller. Using this proposed adaptive step, the MPPT perturb & observe algorithm achieves an excellent dynamic response by adapting the perturbation step to the actual operating conditions of the PV array. The proposed algorithm has been validated and tested in a laboratory using a dual input inductor push-pull converter. This particular converter topology is an efficient interface to boost the low voltage of PV arrays and effectively control the power flow when input or output voltages are variable. The experimental results have proved the superiority of the proposed algorithm in comparison with traditional perturb & observe and incremental conductance techniques.
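
    A minimal perturb & observe loop with an adaptive perturbation step is sketched below; the simplified P-V curve and the rule that scales the step with the observed power change are illustrative assumptions, not the design criterion proposed in the paper.

        # Perturb & observe MPPT with an adaptive perturbation step (toy PV model).
        import numpy as np

        def pv_power(v, v_oc=40.0, i_sc=8.0):
            """Very rough single-panel P-V curve used only for demonstration."""
            i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
            return max(v * i, 0.0)

        v = 20.0                     # initial operating voltage (assumed)
        step = 0.5                   # initial perturbation step in volts (assumed)
        direction = 1.0
        p_prev = pv_power(v)
        for _ in range(100):
            v += direction * step
            p = pv_power(v)
            dp = p - p_prev
            if dp < 0:
                direction = -direction          # power fell: reverse the perturbation
            # Adaptive step: larger relative power changes give larger steps, with limits.
            step = float(np.clip(0.2 * abs(dp) / max(p, 1e-6) * 40.0 + 0.05, 0.05, 1.0))
            p_prev = p

        print("operating point near MPP:", round(v, 2), "V,", round(p_prev, 1), "W")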

  17. Improved chemical identification from sensor arrays using intelligent algorithms

    Science.gov (United States)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
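
    The best-match filling strategy mentioned above can be sketched in a few lines: using only the healthy channels, the stored response pattern closest to the current reading is found and its values are substituted for the faulted channels. The pattern library and fault mask below are made-up examples, not data from the study.

        # Best-match imputation for a chemical sensor array with a faulted channel.
        import numpy as np

        library = np.array([          # stored calibration patterns (rows), assumed values
            [0.9, 0.1, 0.4, 0.7],
            [0.2, 0.8, 0.6, 0.1],
            [0.5, 0.5, 0.9, 0.3],
        ])

        reading = np.array([0.85, 0.15, np.nan, 0.72])   # sensor 2 flagged as faulted
        healthy = ~np.isnan(reading)

        # Find the closest stored pattern using the healthy channels only.
        dists = np.linalg.norm(library[:, healthy] - reading[healthy], axis=1)
        best = int(np.argmin(dists))

        filled = reading.copy()
        filled[~healthy] = library[best, ~healthy]        # fill faulted channels from the best match
        print("best matching pattern:", best, "filled reading:", filled)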

  18. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    Science.gov (United States)

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, tunable Fabry-Perot (F-P) filter and optical switch. To improve system resolution, the F-P filter was employed. As this filter is non-linear, this causes the shifting of central wavelengths with the deviation compensated by the parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, with the system able to realize the combination of 256 FBG sensors. The wavelength scanning speed of 800 Hz can be achieved by a FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed and the peak recognition rate is 100%. Experiments with different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in the thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999 with the temperature sensitivity being 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have an optimum comprehensive performance in terms of precision, capacity and speed.
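
    A small sketch of peak detection with a self-adapting threshold is given below: the threshold is derived from the statistics of the scan itself, and the peak wavelength is refined with a parabolic fit around the maximum sample. The synthetic FBG spectrum and the mean-plus-three-sigma rule are assumptions for illustration, not the algorithm of the paper.

        # Adaptive-threshold peak detection on a synthetic FBG reflection spectrum.
        import numpy as np

        rng = np.random.default_rng(0)
        wavelength = np.linspace(1549.0, 1551.0, 2000)                 # nm, assumed scan range
        spectrum = np.exp(-((wavelength - 1550.12) / 0.05) ** 2)       # synthetic FBG peak
        spectrum += 0.02 * rng.standard_normal(wavelength.size)        # measurement noise

        threshold = spectrum.mean() + 3.0 * spectrum.std()             # self-adapting threshold
        if (spectrum > threshold).any():
            i = int(np.argmax(spectrum))
            # Parabolic interpolation around the peak sample for sub-sample precision.
            y0, y1, y2 = spectrum[i - 1], spectrum[i], spectrum[i + 1]
            delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
            peak_wl = wavelength[i] + delta * (wavelength[1] - wavelength[0])
            print("detected peak near", round(peak_wl, 4), "nm")
        else:
            print("no peak above the adaptive threshold")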

  19. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Weifang Zhang

    2018-04-01

    Full Text Available A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. To improve system resolution, the F–P filter was employed. As this filter is non-linear, this causes the shifting of central wavelengths, with the deviation compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, with the system able to realize the combination of 256 FBG sensors. The wavelength scanning speed of 800 Hz can be achieved by an FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed and the peak recognition rate is 100%. Experiments with different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in the thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with the temperature sensitivity being 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have an optimum comprehensive performance in terms of precision, capacity and speed.

  20. Time Optimized Algorithm for Web Document Presentation Adaptation

    DEFF Research Database (Denmark)

    Pan, Rong; Dolog, Peter

    2010-01-01

    Currently information on the web is accessed through different devices. Each device has its own properties such as resolution, size, and capabilities to display information in different formats and so on. This calls for adaptation of information presentation for such platforms. This paper proposes content-optimized and time-optimized algorithms for information presentation adaptation for different devices based on its hierarchical model. The model is formalized in order to experiment with different algorithms.

  1. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon; Von Schwerin, Erik; Szepessy, Anders; Tempone, Raul

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
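
    A minimal two-level multilevel Monte Carlo estimator with uniform Euler-Maruyama time steps is sketched below for orientation; the adaptive, non-uniform time-stepping and error control analyzed in the work are not reproduced, and the geometric Brownian motion model and sample sizes are assumptions.

        # Two-level MLMC estimator of E[X_T] for geometric Brownian motion.
        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0

        def euler_paths(n_paths, n_steps, dW=None):
            dt = T / n_steps
            if dW is None:
                dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
            x = np.full(n_paths, x0)
            for k in range(n_steps):
                x = x + mu * x * dt + sigma * x * dW[:, k]
            return x

        # Level 0: many cheap coarse samples.
        coarse = euler_paths(100000, 4)
        # Level 1 correction: fine and coarse paths share the SAME Brownian increments.
        fine_dW = rng.normal(0.0, np.sqrt(T / 8), (20000, 8))
        fine = euler_paths(20000, 8, fine_dW)
        coarse_dW = fine_dW[:, 0::2] + fine_dW[:, 1::2]    # aggregate pairs of increments
        coarse_coupled = euler_paths(20000, 4, coarse_dW)

        mlmc_estimate = coarse.mean() + (fine - coarse_coupled).mean()
        print("MLMC estimate of E[X_T]:", round(float(mlmc_estimate), 4),
              " exact:", round(x0 * np.exp(mu * T), 4))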

  2. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n)-O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage exhibited by the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
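
    To make the direct-versus-iterative contrast concrete, the sketch below (a simplification, not the system described above) assumes a dense interaction matrix A with slopes = A @ voltages: the direct path applies a pre-computed pseudo-inverse each frame, whereas the iterative path solves the normal equations with a plain conjugate-gradient loop and never forms an explicit inverse.

      import numpy as np

      def direct_reconstruct(A, s):
          """Direct gradient control: apply the pre-measured (pseudo-)inverse of the
          interaction matrix A to the slope vector s."""
          R = np.linalg.pinv(A)          # computed once offline in a real AO system
          return R @ s

      def cg_reconstruct(A, s, n_iter=50, tol=1e-8):
          """Iterative control: solve the normal equations A^T A v = A^T s with
          plain conjugate gradient, avoiding an explicit dense inverse."""
          b = A.T @ s
          v = np.zeros(A.shape[1])
          r = b.copy()                   # residual for the initial guess v = 0
          p = r.copy()
          rs = r @ r
          for _ in range(n_iter):
              Ap = A.T @ (A @ p)
              alpha = rs / (p @ Ap)
              v += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return v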

  3. Adaptive symbiotic organisms search (SOS) algorithm for structural design optimization

    Directory of Open Access Journals (Sweden)

    Ghanshyam G. Tejani

    2016-07-01

    Full Text Available The symbiotic organisms search (SOS) algorithm is an effective metaheuristic developed in 2014, which mimics the symbiotic relationships among living beings, such as mutualism, commensalism, and parasitism, that help them survive in the ecosystem. In this study, three modified versions of the SOS algorithm are proposed by introducing adaptive benefit factors into the basic SOS algorithm to improve its efficiency. The basic SOS algorithm only considers benefit factors, whereas the proposed variants of the SOS algorithm consider effective combinations of adaptive benefit factors and benefit factors, to study their ability to strike a good balance between exploration and exploitation of the search space. The proposed algorithms are tested on engineering structures subjected to dynamic excitation, which may lead to undesirable vibrations. Structure optimization problems become more challenging if the shape and size variables are taken into account along with the frequency. To check the feasibility and effectiveness of the proposed algorithms, six different planar and space trusses are subjected to experimental analysis. The results obtained using the proposed methods are compared with those obtained using other optimization methods well established in the literature. The results reveal that the adaptive SOS algorithm is more reliable and efficient than the basic SOS algorithm and other state-of-the-art algorithms.

  4. A software framework for pipelined arithmetic algorithms in field programmable gate arrays

    Science.gov (United States)

    Kim, J. B.; Won, E.

    2018-03-01

    Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.

  5. A new hybrid evolutionary algorithm based on new fuzzy adaptive PSO and NM algorithms for Distribution Feeder Reconfiguration

    International Nuclear Information System (INIS)

    Niknam, Taher; Azadfarsani, Ehsan; Jabbari, Masoud

    2012-01-01

    Highlights: ► Network reconfiguration is a very important way to save electrical energy. ► This paper proposes a new algorithm to solve the DFR. ► The algorithm combines NFAPSO with NM. ► The proposed algorithm is tested on two distribution test feeders. - Abstract: Network reconfiguration for loss reduction in distribution systems is a very important way to save electrical energy. This paper proposes a new hybrid evolutionary algorithm to solve the Distribution Feeder Reconfiguration problem (DFR). The algorithm is based on the combination of a New Fuzzy Adaptive Particle Swarm Optimization (NFAPSO) and the Nelder–Mead simplex search method (NM), called NFAPSO–NM. In the proposed algorithm, the new fuzzy adaptive particle swarm optimization includes two parts. The first part is Fuzzy Adaptive Binary Particle Swarm Optimization (FABPSO), which determines the status of tie switches (open or closed), and the second part is Fuzzy Adaptive Discrete Particle Swarm Optimization (FADPSO), which determines the sectionalizing switch number. On the other hand, because the results of the binary PSO (BPSO) and discrete PSO (DPSO) algorithms depend highly on the values of their parameters, such as the inertia weight and learning factors, a fuzzy system is employed to adaptively adjust the parameters during the search process. Moreover, the Nelder–Mead simplex search method is combined with the NFAPSO algorithm to improve its performance. Finally, the proposed algorithm is tested on two distribution test feeders. The simulation results show that the proposed method is very powerful and guarantees to obtain the global optimum.

  6. A theoretical analysis of the median LMF adaptive algorithm

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Rusu, C.

    1999-01-01

    Higher order adaptive algorithms are sensitive to impulse interference. In the case of the LMF (Least Mean Fourth) algorithm, an easy and effective way to reduce this is to median filter the instantaneous gradient of the LMF algorithm. Although previously published simulations have indicated that this reduces the speed of convergence, no analytical studies have yet been made to prove this. In order to enhance the usability, this paper presents a convergence and steady-state analysis of the median LMF adaptive algorithm. As expected this proves that the median LMF has a slower convergence and a lower steady...
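
    A small sketch of the idea, assuming an FIR adaptive filter driven by the LMF cost E[e^4]: the instantaneous gradient e^3*u is median-filtered over a short window before it is applied to the weights. The filter length, step size and window length are arbitrary illustrative choices.

      import numpy as np
      from collections import deque

      def median_lmf(x, d, n_taps=8, mu=1e-4, window=5):
          """Median LMF: the instantaneous LMF gradient e^3 * u is median-filtered
          (per tap, over the last `window` samples) before the weight update."""
          w = np.zeros(n_taps)
          grads = deque(maxlen=window)
          y_out = np.zeros(len(x))
          for n in range(n_taps, len(x)):
              u = x[n - n_taps:n][::-1]            # regressor, most recent sample first
              y = w @ u
              e = d[n] - y
              grads.append(e ** 3 * u)             # instantaneous LMF gradient
              g_med = np.median(np.asarray(grads), axis=0)
              w = w + mu * g_med                   # update with the median-filtered gradient
              y_out[n] = y
          return w, y_out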

  7. Micromirror Arrays for Adaptive Optics; TOPICAL

    International Nuclear Information System (INIS)

    Carr, E.J.

    2000-01-01

    The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (> 95%), large stroke (5-10 µm), and pixel size ≈ 200 µm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed.

  8. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
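
    The sketch below illustrates the Widrow-Hoff-type adaptation with time-varying reduction factors discussed above; the specific decay schedule for the gain is an assumption made for illustration, not the one analyzed in the paper.

      import numpy as np

      def robust_widrow_hoff(patterns, targets, n_epochs=50, gamma0=0.5):
          """Widrow-Hoff (LMS-type) adaptation with a reduction factor that decays
          over the adaptation cycles; a decaying gain is the mechanism the cited
          analysis uses to obtain convergence under bounded measurement noise."""
          w = np.zeros(patterns.shape[1])
          k = 0
          for epoch in range(n_epochs):
              for x, d in zip(patterns, targets):
                  k += 1
                  gamma_k = gamma0 / (1.0 + 0.01 * k)   # assumed decay schedule
                  e = d - w @ x                         # output error for this pattern
                  w = w + gamma_k * e * x               # Widrow-Hoff correction
          return w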

  9. Partially Adaptive STAP Algorithm Approaches to functional MRI

    OpenAIRE

    Huang, Lejian; Thompson, Elizabeth A.; Schmithorst, Vincent; Holland, Scott K.; Talavage, Thomas M.

    2008-01-01

    In this work, the architectures of three partially adaptive STAP algorithms are introduced, one of which is explored in detail, that reduce dimensionality and improve tractability over fully adaptive STAP when used in construction of brain activation maps in fMRI. Computer simulations incorporating actual MRI noise and human data analysis indicate that element space partially adaptive STAP can attain close to the performance of fully adaptive STAP while significantly decreasing processing tim...

  10. Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2013-01-01

    Full Text Available A differential evolution (DE) algorithm with a self-adaptive population resizing mechanism, SapsDE, is proposed to enhance the performance of DE by dynamically choosing one of two mutation strategies and tuning control parameters in a self-adaptive manner. More specifically, more appropriate mutation strategies along with their parameter settings can be determined adaptively according to the previous status at different stages of the evolution process. To verify the performance of SapsDE, 17 benchmark functions with a wide range of dimensions and diverse complexities are used. Nonparametric statistical procedures were performed for multiple comparisons between the proposed algorithm and five well-known DE variants from the literature. Simulation results show that SapsDE is effective and efficient. It also exhibits superior results compared with the other five algorithms employed in the comparison in most of the cases.

  11. Adaptive sampling algorithm for detection of superpoints

    Institute of Scientific and Technical Information of China (English)

    CHENG Guang; GONG Jian; DING Wei; WU Hua; QIANG ShiQiang

    2008-01-01

    The superpoints are the sources (or the destinations) that connect with a great number of destinations (or sources) during a measurement time interval, so detecting the superpoints in real time is very important to network security and management. Previous algorithms are not able to control the usage of memory and to deliver the desired accuracy, so it is hard to detect the superpoints on a high speed link in real time. In this paper, we propose an adaptive sampling algorithm to detect the superpoints in real time, which uses a flow sample-and-hold module to reduce the detection of the non-superpoints and to improve the measurement accuracy of the superpoints. We also design a data stream structure to maintain the flow records, which compensates for the flow Hash collisions statistically. An adaptive process based on different sampling probabilities is used to maintain the recorded IP addresses in the limited memory. This algorithm is compared with the other algorithms by analyzing real network trace data. Experiment results and mathematical analysis show that this algorithm has the advantages of both a limited memory requirement and high measurement accuracy.

  12. Parameter identification of PEMFC model based on hybrid adaptive differential evolution algorithm

    International Nuclear Information System (INIS)

    Sun, Zhe; Wang, Ning; Bi, Yunrui; Srinivasan, Dipti

    2015-01-01

    In this paper, a HADE (hybrid adaptive differential evolution) algorithm is proposed for the identification problem of the PEMFC (proton exchange membrane fuel cell). Inspired by biological genetic strategy, a novel adaptive scaling factor and a dynamic crossover probability are presented to improve the adaptive and dynamic performance of the differential evolution algorithm. Moreover, two kinds of neighborhood search operations based on the bee colony foraging mechanism are introduced for enhancing local search efficiency. Through testing on benchmark functions, the proposed algorithm exhibits better performance in convergence accuracy and speed. Finally, the HADE algorithm is applied to identify the nonlinear parameters of the PEMFC stack model. Through experimental comparison with other identification methods, the PEMFC model based on the HADE algorithm shows better performance. - Highlights: • We propose a hybrid adaptive differential evolution algorithm (HADE). • The search efficiency is enhanced in low and high dimensional search spaces. • The effectiveness is confirmed by testing benchmark functions. • The identification of the PEMFC model is conducted by adopting HADE.
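
    As an illustration of the kind of adaptation involved, the sketch below runs DE/rand/1/bin with a generation-dependent scaling factor and crossover probability; the linear schedules (and the omission of the bee-colony neighborhood search) are assumptions for the sketch, not the HADE rules themselves.

      import numpy as np

      def adaptive_de(f, bounds, pop_size=30, n_gen=200, seed=0):
          """DE/rand/1/bin with a scaling factor F that shrinks and a crossover
          probability CR that grows over the generations (illustrative schedules)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for g in range(n_gen):
              F = 0.9 - 0.5 * g / n_gen          # adaptive scaling factor
              CR = 0.5 + 0.4 * g / n_gen         # dynamic crossover probability
              for i in range(pop_size):
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True          # force at least one mutant gene
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = f(trial)
                  if f_trial < fit[i]:                     # greedy selection
                      pop[i], fit[i] = trial, f_trial
          return pop[np.argmin(fit)], fit.min()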

  13. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  14. Fast algorithm of adaptive Fourier series

    Science.gov (United States)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea, and in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA) that was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced, from the original $\\mathcal{O}(M N^2)$ to $\\mathcal{O}(M N\\log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.

  15. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    Science.gov (United States)

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361

  16. Wavelet Adaptive Algorithm and Its Application to MRE Noise Control System

    Directory of Open Access Journals (Sweden)

    Zhang Yulin

    2015-01-01

    Full Text Available To address the limitations of the conventional adaptive algorithm used for active noise control (ANC) systems, this paper proposed and studied two adaptive algorithms based on Wavelets. The two are applied to a noise control system including magnetorheological elastomers (MRE), which is a smart viscoelastic material characterized by a complex modulus dependent on vibration frequency and controllable by external magnetic fields. Simulation results reveal that the Decomposition LMS algorithm (D-LMS) and the Decomposition and Reconstruction LMS algorithm (DR-LMS) based on Wavelets can significantly improve the noise reduction performance of the MRE control system compared with the traditional LMS algorithm.

  17. 2-D DOA Estimation of LFM Signals Based on Dechirping Algorithm and Uniform Circle Array

    Directory of Open Access Journals (Sweden)

    K. B. Cui

    2017-04-01

    Full Text Available Based on the Dechirping algorithm and a uniform circle array (UCA), a new 2-D direction of arrival (DOA) estimation algorithm for linear frequency modulation (LFM) signals is proposed in this paper. The algorithm uses the idea of Dechirping: it regards the signal to be estimated, as received by the reference sensor, as the reference signal and performs difference-frequency processing with the signal received by each sensor. The signal to be estimated thus becomes a single-frequency signal at each sensor. Then we transform the single-frequency signal into an isolated impulse through the Fourier transform (FFT) and construct a new array data model based on the prominent parts of the impulse. Finally, we respectively use the multiple signal classification (MUSIC) algorithm and the rotational invariance technique (ESPRIT) algorithm to realize 2-D DOA estimation of LFM signals. The simulation results verify the effectiveness of the proposed algorithm.
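
    The dechirping step described above can be sketched as follows, assuming complex baseband records of shape (sensors, samples) with the reference sensor at index 0; the subsequent MUSIC/ESPRIT stage is not shown.

      import numpy as np

      def dechirp_to_tones(snapshots, ref_index=0):
          """Dechirping: multiply each sensor's record by the conjugate of the
          reference sensor's record, turning the LFM into (approximately) a single
          tone per sensor, then locate the dominant FFT bin of each tone."""
          ref = snapshots[ref_index]
          tones = snapshots * np.conj(ref)          # difference-frequency signals
          spec = np.fft.fft(tones, axis=1)
          peak_bins = np.argmax(np.abs(spec), axis=1)
          # A narrowband array snapshot can be formed from the complex spectrum
          # values at the peak bins and passed to MUSIC/ESPRIT for 2-D DOA estimation.
          narrowband_snapshot = spec[np.arange(spec.shape[0]), peak_bins]
          return narrowband_snapshot, peak_bins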

  18. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    International Nuclear Information System (INIS)

    Leung, Daniel

    2011-01-01

    A highly parallel radio frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable gain amplifiers are used to provide negative feedback to adapt the locking bandwidth with the input signal to yield a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  19. Probe suppression in conformal phased array

    CERN Document Server

    Singh, Hema; Neethu, P S

    2017-01-01

    This book considers a cylindrical phased array with microstrip patch antenna elements and half-wavelength dipole antenna elements. The effect of the platform and mutual coupling is included in the analysis. The non-planar geometry is tackled by using Euler's transformation towards the calculation of the array manifold. Results are presented for both conducting and dielectric cylinders. The optimal weights obtained are used to generate an adapted pattern according to a given signal scenario. It is shown that the array, along with the adaptive algorithm, is able to cater to an arbitrary signal environment even when the platform effect and mutual coupling are taken into account. This book provides a step-by-step approach for analyzing probe suppression in non-planar geometry. Its detailed illustrations and analysis will be a useful text for graduate and research students, scientists and engineers working in the area of phased arrays, low-observables and stealth technology.

  20. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both... adaptation algorithm has linear complexity with the number of available resource blocks, while still providing very good performance, as shown by simulation results....

  1. A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization

    Directory of Open Access Journals (Sweden)

    Zhongbo Hu

    2014-01-01

    Full Text Available Many improved differential evolution (DE) algorithms have emerged since this very competitive class of evolutionary computation appeared more than a decade ago. However, few improved DE algorithms guarantee global convergence in theory. This paper develops a DE algorithm that is convergent in theory, which employs a self-adaptation scheme for the parameters and two operators, that is, uniform mutation and hidden adaptation selection (haS) operators. The parameter self-adaptation and the uniform mutation operator enhance the diversity of populations and guarantee ergodicity. The haS can automatically remove some inferior individuals in the process of enhancing population diversity. The haS controls the proposed algorithm to break the loop of the current generation with a small probability. The breaking probability is a hidden adaptation and proportional to the changes in the number of inferior individuals. The proposed algorithm is tested on ten engineering optimization problems taken from IEEE CEC2011.

  2. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    Science.gov (United States)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of the areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small quantity of large SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for a majority of events. For such hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of location of event and its origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated to time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic

  3. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for the optimal design of thermal devices. • Optimization of a heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than those of other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm, called the self-adaptive Jaya algorithm, for the optimal design of selected thermal devices, viz. heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by using the self-adaptive Jaya algorithm are compared with those achieved by using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of results, computational effort and function evaluations.
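
    For reference, the basic Jaya move that the self-adaptive variant builds on is shown below: each candidate is attracted to the best solution and repelled from the worst, with no algorithm-specific tuning parameters. The population-size adaptation of the self-adaptive version is omitted, and the bounds handling is an illustrative choice.

      import numpy as np

      def jaya(f, bounds, pop_size=20, n_gen=200, seed=0):
          """Basic Jaya update: x' = x + r1*(best - |x|) - r2*(worst - |x|),
          with greedy acceptance of improving trial points."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for _ in range(n_gen):
              best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
              for i in range(pop_size):
                  r1, r2 = rng.random(dim), rng.random(dim)
                  trial = pop[i] + r1 * (best - np.abs(pop[i])) - r2 * (worst - np.abs(pop[i]))
                  trial = np.clip(trial, lo, hi)
                  f_trial = f(trial)
                  if f_trial < fit[i]:           # keep the trial only if it improves
                      pop[i], fit[i] = trial, f_trial
          return pop[np.argmin(fit)], fit.min()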

  4. Partially Adaptive STAP Algorithm Approaches to functional MRI

    Science.gov (United States)

    Huang, Lejian; Thompson, Elizabeth A.; Schmithorst, Vincent; Holland, Scott K.; Talavage, Thomas M.

    2010-01-01

    In this work, the architectures of three partially adaptive STAP algorithms are introduced, one of which is explored in detail, that reduce dimensionality and improve tractability over fully adaptive STAP when used in construction of brain activation maps in fMRI. Computer simulations incorporating actual MRI noise and human data analysis indicate that element space partially adaptive STAP can attain close to the performance of fully adaptive STAP while significantly decreasing processing time and maximum memory requirements, and thus demonstrates potential in fMRI analysis. PMID:19272913

  5. Design of Circularly-Polarised, Crossed Drooping Dipole, Phased Array Antenna Using Genetic Algorithm Optimisation

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal

    2007-01-01

    A printed drooping dipole array is designed and constructed. The design is based on a genetic algorithm optimisation procedure used in conjunction with the software programme AWAS. By optimising the array G/T for specific combinations of scan angles and frequencies an optimum design is obtained...

  6. Adaptive firefly algorithm: parameter analysis and its application.

    Directory of Open Access Journals (Sweden)

    Ngaam J Cheung

    Full Text Available As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints with noise-free and 10% Gaussian white noise, respectively.

  7. Algorithms for adaptive nonlinear pattern recognition

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric; Key, Gary

    2011-09-01

    In Bayesian pattern recognition research, static classifiers have featured prominently in the literature. A static classifier is essentially based on a static model of input statistics, thereby assuming input ergodicity that is not realistic in practice. Classical Bayesian approaches attempt to circumvent the limitations of static classifiers, which can include brittleness and narrow coverage, by training extensively on a data set that is assumed to cover more than the subtense of expected input. Such assumptions are not realistic for more complex pattern classification tasks, for example, object detection using pattern classification applied to the output of computer vision filters. In contrast, we have developed a two step process, that can render the majority of static classifiers adaptive, such that the tracking of input nonergodicities is supported. Firstly, we developed operations that dynamically insert (or resp. delete) training patterns into (resp. from) the classifier's pattern database, without requiring that the classifier's internal representation of its training database be completely recomputed. Secondly, we developed and applied a pattern replacement algorithm that uses the aforementioned pattern insertion/deletion operations. This algorithm is designed to optimize the pattern database for a given set of performance measures, thereby supporting closed-loop, performance-directed optimization. This paper presents theory and algorithmic approaches for the efficient computation of adaptive linear and nonlinear pattern recognition operators that use our pattern insertion/deletion technology - in particular, tabular nearest-neighbor encoding (TNE) and lattice associative memories (LAMs). Of particular interest is the classification of nonergodic datastreams that have noise corruption with time-varying statistics. The TNE and LAM based classifiers discussed herein have been successfully applied to the computation of object classification in hyperspectral
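
    A generic stand-in for the insert/delete mechanism described above (not the TNE or LAM encodings themselves): a nearest-neighbour classifier whose pattern database can be grown or pruned incrementally, so the decision rule can track drifting input statistics without a full retrain.

      import numpy as np

      class AdaptiveNNClassifier:
          """Nearest-neighbour classifier with incremental insert/delete of
          training patterns; a simplified illustration of an adaptive classifier
          whose internal database is not recomputed from scratch on each change."""

          def __init__(self):
              self.patterns = []                  # list of (feature_vector, label) pairs

          def insert(self, x, label):
              self.patterns.append((np.asarray(x, dtype=float), label))

          def delete_oldest(self, n=1):
              del self.patterns[:n]               # drop stale patterns as statistics drift

          def classify(self, x):
              x = np.asarray(x, dtype=float)
              dists = [np.linalg.norm(x - p) for p, _ in self.patterns]
              return self.patterns[int(np.argmin(dists))][1]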

  8. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.

  9. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  10. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    Science.gov (United States)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-10-01

    We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and the quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array has been installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km distant from the array and covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves, for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array are mostly arriving from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde and for the first harmonic mode, it is prograde. For both modes, we can also retrieve dispersion and ellipticity

  11. Adaptive Multigrid Algorithm for the Lattice Wilson-Dirac Operator

    International Nuclear Information System (INIS)

    Babich, R.; Brower, R. C.; Rebbi, C.; Brannick, J.; Clark, M. A.; Manteuffel, T. A.; McCormick, S. F.; Osborn, J. C.

    2010-01-01

    We present an adaptive multigrid solver for application to the non-Hermitian Wilson-Dirac system of QCD. The key components leading to the success of our proposed algorithm are the use of an adaptive projection onto coarse grids that preserves the near null space of the system matrix together with a simplified form of the correction based on the so-called γ5-Hermitian symmetry of the Dirac operator. We demonstrate that the algorithm nearly eliminates critical slowing down in the chiral limit and that it has weak dependence on the lattice volume.

  12. A new adaptive blind channel identification algorithm

    International Nuclear Information System (INIS)

    Peng Dezhong; Xiang Yong; Yi Zhang

    2009-01-01

    This paper addresses the blind identification of single-input multiple-output (SIMO) finite-impulse-response (FIR) systems. We first propose a new adaptive algorithm for the blind identification of SIMO FIR systems. Then, its convergence property is analyzed systematically. It is shown that under some mild conditions, the proposed algorithm is guaranteed to converge in the mean to the true channel impulse responses in both noisy and noiseless cases. Simulations are carried out to demonstrate the theoretical results.

  13. Multiple wall-reflection effect in adaptive-array differential-phase reflectometry on QUEST

    International Nuclear Information System (INIS)

    Idei, H.; Fujisawa, A.; Nagashima, Y.; Onchi, T.; Hanada, K.; Zushi, H.; Mishra, K.; Hamasaki, M.; Hayashi, Y.; Yamamoto, M.K.

    2016-01-01

    A phased array antenna and Software-Defined Radio (SDR) heterodyne-detection systems have been developed for adaptive array approaches in reflectometry on the QUEST. In the QUEST device, which can be considered a large oversized cavity, a standing wave (multiple wall-reflection) effect was significantly observed, with distorted amplitude and phase evolution even when the adaptive array analyses were applied. The distorted fields were analyzed by Fast Fourier Transform (FFT) in the wavenumber domain to treat separately the components with and without wall reflections. The differential phase evolution was properly obtained from the distorted field evolution by the FFT procedures. A frequency derivative method has been proposed to overcome the multiple wall-reflection effect, and the SDR super-heterodyned components with small frequency differences for the derivative method were correctly obtained using the FFT analysis.

  14. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.

  15. Adaptive smart simulator for characterization and MPPT construction of PV array

    International Nuclear Information System (INIS)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-01-01

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; this tool is easy to use, simple, and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  16. Adaptive smart simulator for characterization and MPPT construction of PV array

    Science.gov (United States)

    Ouada, Mehdi; Meridjet, Mohamed Salah; Dib, Djalel

    2016-07-01

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; this tool is easy to use, simple, and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.

  17. Adaptive smart simulator for characterization and MPPT construction of PV array

    Energy Technology Data Exchange (ETDEWEB)

    Ouada, Mehdi, E-mail: mehdi.ouada@univ-annaba.org; Meridjet, Mohamed Salah [Electromechanical engineering department, Electromechanical engineering laboratory, Badji Mokhtar University, B.P. 12, Annaba (Algeria)]; Dib, Djalel [Department of Electrical Engineering, University of Tebessa, Tebessa (Algeria)]

    2016-07-25

    Partial shading conditions are among the most important problems in large photovoltaic arrays. Many works in the literature address the modeling, control and optimization of photovoltaic conversion of solar energy under partial shading conditions. The aim of this study is to build a software simulator, similar to a hardware simulator, that produces a shading pattern of the proposed photovoltaic array, so that the delivered information can be used to obtain an optimal configuration of the PV array and to construct an MPPT algorithm. Graphical user interfaces (Matlab GUI) are built using a developed script; this tool is easy to use, simple, and highly responsive. The simulator supports large array simulations that can be interfaced with MPPT and power electronic converters.
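
    As an example of the kind of MPPT rule such a simulator can drive, here is a single perturb-and-observe step; P&O is used purely for illustration, since the records above do not commit to a specific MPPT technique, and the step size is an arbitrary choice.

      def perturb_and_observe(v_meas, p_meas, v_prev, p_prev, v_ref, step=0.5):
          """One perturb-and-observe MPPT step: keep perturbing the voltage
          reference in the direction that increased the measured power."""
          dv = v_meas - v_prev
          dp = p_meas - p_prev
          if dp == 0:
              pass                      # no change in power: keep the reference
          elif (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
              v_ref += step             # moving toward the MPP: keep going
          else:
              v_ref -= step             # moved past the MPP: reverse direction
          return v_ref

    In a simulation loop, the returned reference would be fed to the converter model, and the measured voltage and power would become v_prev and p_prev for the next call.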

  18. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  19. Sweet Spot Control of 1:2 Array Antenna using A Modified Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Kyo-Hwan HYUN

    2007-10-01

    Full Text Available This paper presents a novel scheme that quickly searches for the sweet spot of 1:2 array antennas and locks on to it for high-speed millimeter-wavelength transmissions when communications to another antenna array are disconnected. The proposed method utilizes a modified genetic algorithm, which selects a superior initial group through preprocessing in order to avoid the local-solution problem of a plain genetic algorithm. TDD (Time Division Duplex) is utilized as the transfer method and data controller for the antenna. Once the initial communication is completed for the specified number of individuals, no further antenna data are transmitted until each station processes the GA in order to produce the next generation. After reproduction, the individuals of the next generation become the data, and communication between the stations is made again. The simulation results for 1:1 and 1:2 array antennas and the experimental results for a 1:1 array antenna confirmed the efficiency of the proposed method. The genes tested are 8-bit, 16-bit and 16-bit split genes; the 16-bit split gene has performance similar to the 16-bit gene, while the antenna gene itself is 8 bits.

  20. Design and implementation of adaptive inverse control algorithm for a micro-hand control system

    Directory of Open Access Journals (Sweden)

    Wan-Cheng Wang

    2014-01-01

    Full Text Available The Letter proposes an online tuned adaptive inverse position control algorithm for a micro-hand. First, the configuration of the micro-hand is discussed. Next, a kinematic analysis of the micro-hand is investigated and then the relationship between the rotor position of micro-permanent magnet synchronous motor and the tip of the micro-finger is derived. After that, an online tuned adaptive inverse control algorithm, which includes an adaptive inverse model and an adaptive inverse control, is designed. The online tuned adaptive inverse control algorithm has better performance than the proportional–integral control algorithm does. In addition, to avoid damaging the object during the grasping process, an online force control algorithm is proposed here as well. An embedded micro-computer, cRIO-9024, is used to realise the whole position control algorithm and the force control algorithm by using software. As a result, the hardware circuit is very simple. Experimental results show that the proposed system can provide fast transient responses, good load disturbance responses, good tracking responses and satisfactory grasping responses.

  1. Adaptive phase k-means algorithm for waveform classification

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
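
    A simplified sketch of a phase-tolerant distance for the k-means assignment step: each trace is rotated through a grid of constant phase shifts via its analytic signal, and the smallest Euclidean distance to the centroid is kept. The published method lets the phase vary from sample to sample along the trace; the constant-phase grid search here is an assumption made to keep the sketch short (requires numpy and scipy).

      import numpy as np
      from scipy.signal import hilbert

      def phase_distance(trace, centroid, n_phases=36):
          """Phase-insensitive distance: rotate the trace through a grid of constant
          phase shifts and keep the smallest Euclidean distance to the centroid."""
          analytic = hilbert(trace)                       # analytic signal of the trace
          best = np.inf
          for phi in np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False):
              rotated = np.real(analytic * np.exp(1j * phi))
              best = min(best, np.linalg.norm(rotated - centroid))
          return best

      def assign_clusters(traces, centroids):
          """k-means assignment step using the phase-adjusted distance."""
          labels = np.empty(len(traces), dtype=int)
          for i, tr in enumerate(traces):
              labels[i] = np.argmin([phase_distance(tr, c) for c in centroids])
          return labels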

  2. Adaptive Control Algorithm of the Synchronous Generator

    Directory of Open Access Journals (Sweden)

    Shevchenko Victor

    2017-01-01

    Full Text Available The article discusses the problem of controlling a synchronous generator, namely maintaining the stability of the controlled object when noise and disturbances occur in the regulation process. The model of the synchronous generator is represented by the Park-Gorev system of differential equations, where the state variables are computed relative to the synchronously rotating d, q axes. Control of the synchronous generator is organized on the basis of position-path control using adaptation algorithms with a reference model. The basic control law is directed at stabilizing the generated frequency, the current, and the required power level, which is achieved by controlling the mechanical torque on the turbine shaft and the excitation voltage of the synchronous generator. The classic reference-model adaptation algorithm is modified by introducing additional adaptation variables into the controller, which allows the error between the reference and the investigated model to be kept within prescribed limits. Mathematical modeling of the control was carried out with continuous, nonlinear, and unmeasured disturbances acting on the studied model. The simulation results confirm the high tracking and adaptation accuracy of the investigated model with respect to the reference; the steady value of the loop error depends on the regulator's performance parameters.

  3. A new normalizing algorithm for BAC CGH arrays with quality control metrics.

    Science.gov (United States)

    Miecznikowski, Jeffrey C; Gaile, Daniel P; Liu, Song; Shepherd, Lori; Nowak, Norma

    2011-01-01

    The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically, in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model these data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  4. Optimization of single channel glazed photovoltaic thermal (PVT) array using Evolutionary Algorithm (EA) and carbon credit earned by the optimized array

    International Nuclear Information System (INIS)

    Singh, Sonveer; Agrawal, Sanjay; Gadh, Rajit

    2015-01-01

    Highlights: • Optimization of an SCGPVT array using an Evolutionary Algorithm. • The overall exergy gain is maximized with an Evolutionary Algorithm. • Annual performance has been evaluated for New Delhi (India). • There is an improvement in results over the model given in the literature. • Carbon credit analysis has been done. - Abstract: In this paper, work is carried out in three steps. In the first step, optimization of a single channel glazed photovoltaic thermal (SCGPVT) array has been done with an Evolutionary Algorithm (EA), taking the overall exergy gain as the objective function of the SCGPVT array. For maximization of the overall exergy gain, a total of seven design variables have been optimized, namely the length of the channel (L), mass flow rate of the flowing fluid (m_F), velocity of the flowing fluid (V_F), convective heat transfer coefficient through the tedlar (U_T), overall heat transfer coefficient between the solar cell and ambient through the glass cover (U_S_C_A_G), overall back loss heat transfer coefficient from the flowing fluid to ambient (U_F_A) and convective heat transfer coefficient of the tedlar (h_T). It has been observed that the instantaneous overall exergy gain obtained from the optimized system is 1.42 kW h, which is 87.86% more than the overall exergy gain of the un-optimized system given in the literature. In the second step, the overall exergy gain and overall thermal gain of the SCGPVT array have been evaluated annually, and there are improvements of 69.52% and 88.05% in the annual overall exergy gain and annual overall thermal gain, respectively, over the un-optimized system for the same input irradiance and ambient temperature. In the third step, the carbon credit earned by the optimized SCGPVT array has also been evaluated, as per the norms of the Kyoto Protocol, for Bangalore climatic conditions.

  5. A Hybrid Adaptive Routing Algorithm for Event-Driven Wireless Sensor Networks

    Science.gov (United States)

    Figueiredo, Carlos M. S.; Nakamura, Eduardo F.; Loureiro, Antonio A. F.

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207

  6. A Novel Self-Adaptive Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Kaiping Luo

    2013-01-01

    Full Text Available The harmony search algorithm is a music-inspired optimization technology and has been successfully applied to diverse scientific and engineering problems. However, like other metaheuristic algorithms, it still faces two difficulties: parameter setting and finding the optimal balance between diversity and intensity in searching. This paper proposes a novel, self-adaptive search mechanism for optimization problems with continuous variables. This new variant can automatically configure the evolutionary parameters in accordance with problem characteristics, such as the scale and the boundaries, and dynamically select evolutionary strategies in accordance with its search performance. The new variant simplifies the parameter setting and efficiently solves all types of optimization problems with continuous variables. Statistical test results show that this variant is considerably robust and outperforms the original harmony search (HS, improved harmony search (IHS, and other self-adaptive variants for large-scale optimization problems and constrained problems.
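
    For comparison with the self-adaptive variant described above, a plain harmony search improvisation loop with fixed HMCR, PAR and bandwidth looks as follows; the self-adaptive algorithm instead configures these quantities from the problem scale and its own search feedback, which is not reproduced here.

      import numpy as np

      def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, n_iter=2000, seed=0):
          """Basic harmony search: improvise a new harmony from memory, pitch
          adjustment and random selection, then replace the worst stored harmony
          whenever the new one improves on it."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          memory = rng.uniform(lo, hi, size=(hms, dim))
          fit = np.array([f(x) for x in memory])
          for _ in range(n_iter):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:                      # memory consideration
                      new[j] = memory[rng.integers(hms), j]
                      if rng.random() < par:                   # pitch adjustment
                          new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                  else:                                        # random selection
                      new[j] = rng.uniform(lo[j], hi[j])
              new = np.clip(new, lo, hi)
              f_new = f(new)
              worst = np.argmax(fit)
              if f_new < fit[worst]:                           # replace the worst harmony
                  memory[worst], fit[worst] = new, f_new
          return memory[np.argmin(fit)], fit.min()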

  7. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate. (geophysics, astronomy, and astrophysics)

  8. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset, most suitable for sorting an individual set of spike signals. Thus, there is a new approach that evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack and fully simulates the predation behavior and prey-distribution strategy of wolves. It possesses three intelligent behaviors: migration, summons and siege. The competition rule of "winner-take-all" and the update mechanism of "survival of the fittest" are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  10. A software sampling frequency adaptive algorithm for reducing spectral leakage

    Institute of Scientific and Technical Information of China (English)

    PAN Li-dong; WANG Fei

    2006-01-01

    Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important factor that reduces the accuracy of spectral analysis and harmonic measurement. This paper presents a software sampling-frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated in software and modifies the sampling frequency adaptively. The algorithm reduces the synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power-system signals whose frequency changes slowly. Simulations show that the algorithm has high precision, and it can be a practical method for power-system harmonic analysis since it is easy to implement.
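    A common way to realise the "estimate the frequency in software, then resynchronise the sampling" idea is an FFT peak estimate refined by parabolic interpolation, followed by choosing a sampling rate that gives an integer number of samples per estimated period. The sketch below is generic and not the paper's algorithm; the Hann window and the 128 samples per cycle are illustrative assumptions.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Estimate the dominant frequency of x (sampled at fs) from the FFT peak,
    refined by parabolic interpolation of the log-magnitude spectrum."""
    n = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(n)))   # window to reduce leakage
    k = int(np.argmax(X[1:])) + 1                # skip the DC bin
    k = min(k, len(X) - 2)                       # keep the 3-point stencil valid
    a, b, c = np.log(X[k - 1:k + 2] + 1e-12)
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # sub-bin peak offset
    return (k + delta) * fs / n

def synchronous_rate(f_est, samples_per_cycle=128):
    """Pick a new sampling rate with an integer number of samples per period,
    so an integer number of signal cycles fits the analysis record."""
    return f_est * samples_per_cycle

# Example: a 50.3 Hz tone sampled at 5 kHz.
fs = 5000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 50.3 * t)
f_hat = estimate_frequency(x, fs)
print(f_hat, synchronous_rate(f_hat))
```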

  11. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    Full Text Available An adaptive spatial clustering (ASC) algorithm is proposed in the present study, which employs sweep-circle techniques and a dynamic threshold setting based on Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through the modification of an initial model (for example, a minimal spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily-shaped clusters while adapting efficiently to the non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also well suited to data streaming, where spatial data with dynamic characteristics flow in continuously and must be clustered within large data sets.

  12. Comparative study of adaptive-noise-cancellation algorithms for intrusion detection systems

    International Nuclear Information System (INIS)

    Claassen, J.P.; Patterson, M.M.

    1981-01-01

    Some intrusion detection systems are susceptible to nonstationary noise, resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential in removing correlated noise components prior to detection. For such noise-susceptible systems the suitability of a noise-reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in the light of the nature of the intruder and noise signals.
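    For the two-channel configuration mentioned above (a primary sensor channel plus a noise-only reference), adaptive noise cancellation is most often illustrated with a normalised LMS filter. The following sketch is a generic canceller, not one of the specific algorithm classes compared in the report; the filter length and step size are illustrative.

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=32, mu=0.005):
    """Two-channel adaptive noise cancellation with a normalised LMS filter.

    primary   : signal of interest plus correlated noise
    reference : noise-only channel, correlated with the noise in `primary`
    Returns the cleaned output e[n] = primary[n] - estimated noise.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # latest reference samples
        y = w @ x                                # noise estimate
        e = primary[n] - y                       # cleaned sample
        w += mu * e * x / (x @ x + 1e-12)        # NLMS weight update
        out[n] = e
    return out

# Example: slow "intruder" sinusoid buried in correlated noise.
rng = np.random.default_rng(1)
noise = rng.standard_normal(20000)
primary = (np.sin(2 * np.pi * 0.01 * np.arange(20000))
           + np.convolve(noise, [0.6, 0.3, 0.1], "same"))
cleaned = lms_noise_canceller(primary, noise)
```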

  13. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  14. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  15. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation......, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...

  16. Application of Hybrid Optimization Algorithm in the Synthesis of Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    Ezgi Deniz Ülker

    2014-01-01

    Full Text Available The use of hybrid algorithms for solving real-world optimization problems has become popular since their solution quality can be made better than that of the algorithms that form them by combining their desirable features. The newly proposed hybrid method, called the Hybrid Differential, Particle, and Harmony (HDPH) algorithm, is different from other hybrid forms since it uses all features of the merged algorithms in order to perform efficiently for a wide variety of problems. In the proposed algorithm the control parameters are randomized, which makes its implementation easy and provides a fast response. This paper describes the application of the HDPH algorithm to linear antenna array synthesis. The results obtained with the HDPH algorithm are compared with the three merged optimization techniques that are used in HDPH. The comparison shows that the performance of the proposed algorithm is comparatively better in both solution quality and robustness. The proposed hybrid algorithm HDPH can be an efficient candidate for real-time optimization problems since it yields reliable performance whenever it is executed.

  17. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Full Text Available Image restoration is a fundamental problem in various areas of imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction, and the inner extrapolation is implemented by an adaptive scheme. By using the framework of the contraction method, a global convergence result and a convergence rate of O(1/N) can be established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with the state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

  18. EFFICIENT ADAPTIVE STEGANOGRAPHY FOR COLOR IMAGES BASED ON LSBMR ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. Sharmila

    2012-02-01

    Full Text Available Steganography is the art of hiding the fact that communication is taking place, by hiding information in another medium. Many different carrier file formats can be used, but digital images are the most popular because of their frequent use on the Internet. For hiding secret information in images, a large variety of steganographic techniques exists. The Least Significant Bit (LSB) based approach is the simplest type of steganographic algorithm. In all the existing approaches, the decision of choosing the region within a cover image is made without considering the relationship between the image content and the size of the secret message. Thus, the plain regions in the cover will be ruined after data hiding, even at a low data rate; choosing the edge regions for data hiding is therefore a solution, and many algorithms deal with image edges for this purpose. The paper 'Edge adaptive image steganography based on LSBMR algorithm' is an LSB steganography study that presented results on gray-scale images only. This paper presents the results of analyzing the performance of edge adaptive steganography for colored (JPEG) images. The algorithms have been slightly modified for colored-image implementation and are compared on the basis of evaluation parameters such as peak signal-to-noise ratio (PSNR) and mean square error (MSE). This method selects the edge region depending on the length of the secret message and the difference between two consecutive bits in the cover image. When the message is short, only small edge regions are utilized while the other regions are left untouched. When the data rate increases, more regions can be used adaptively for data hiding by adjusting the parameters. Besides this, the message is encrypted using an efficient cryptographic algorithm, which further increases the security.

  19. Optimization of ultrasonic arrays design and setting using a differential evolution

    International Nuclear Information System (INIS)

    Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.

    2011-01-01

    Optimizing both the design and the settings of phased arrays is not straightforward when it is performed manually via parametric studies. An optimization method based on an Evolutionary Algorithm and numerical simulation is proposed and evaluated. The Randomized Adaptive Differential Evolution has been adapted to meet the specificities of non-destructive testing applications. In particular, multi-objective problems are addressed through the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic cases of application: optimization of the position and delay laws of a flexible array inspecting a nozzle, treated as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, treated as a constrained bi-objective problem. (authors)

  20. An objective procedure for evaluation of adaptive antifeedback algorithms in hearing aids.

    Science.gov (United States)

    Freed, Daniel J; Soli, Sigfrid D

    2006-08-01

    This study evaluated the performance of nine adaptive antifeedback algorithms. There were two goals: first, to identify objective procedures that are useful for evaluating these algorithms, and second, to identify strengths and weaknesses of existing algorithms. The algorithms were evaluated in behind-the-ear implementations on the Knowles Electronics Manikin for Acoustic Research (KEMAR). Different acoustic conditions were created by placing a telephone handset or a hat on KEMAR. Electroacoustic techniques were devised to measure the following performance aspects of each algorithm: (1) additional gain made available before oscillation, (2) gain lost in specific frequency regions, (3) reduction of suboscillatory peaks in the frequency response, (4) speed of adaptation to changing acoustic conditions, and (5) robustness in the presence of tonal input signals. For each measurement, performance varied widely across algorithms. No single algorithm was clearly superior or inferior to the others. Generally, the feedback cancellation algorithms were less likely to sacrifice gain in specific frequency regions and better at reducing suboscillatory peaks, whereas the algorithms that used noncancellation techniques were more tolerant of tonal input signals. For those algorithms equipped with special operational modes intended for music listening, the music mode improved the response to tonal inputs but sometimes sacrificed other performance aspects. Algorithms that required an acoustic measurement for initialization purposes tended to perform poorly in acoustic conditions dissimilar to the condition in which initialization was performed. The objective methods devised for this study appear useful for evaluating the performance of adaptive antifeedback algorithms. Currently available algorithms demonstrate a wide range of performance, and further research is required to develop new algorithms that combine the best features of existing algorithms.

  1. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the effect of adaptive optics images’ restoration, we put forward a deconvolution algorithm improved by the EM algorithm which joints multiframe adaptive optics images based on expectation-maximization theory. Firstly, we need to make a mathematical model for the degenerate multiframe adaptive optics images. The function model is deduced for the points that spread with time based on phase error. The AO images are denoised using the image power spectral density and support constrain...

  2. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two...... mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions...... and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...

  3. Adaptive Watermarking Algorithm in DCT Domain Based on Chaos

    Directory of Open Access Journals (Sweden)

    Wenhao Wang

    2013-05-01

    Full Text Available In order to improve the security, robustness and invisibility of digital watermarking, a new adaptive watermarking algorithm is proposed in this paper. Firstly, the algorithm uses a chaos sequence, produced by the Logistic chaotic map, to encrypt the watermark image. The original image is then divided into many sub-blocks and transformed by the discrete cosine transform (DCT). The watermark information is embedded into the mid-frequency coefficients of the sub-blocks. With the features of the Human Visual System (HVS) and the image texture sufficiently taken into account during embedding, the embedding intensity of the watermark adaptively adjusts according to the HVS and texture characteristics. The watermark is embedded into the coefficients of the different sub-blocks. Experimental results have shown that the proposed algorithm is robust against common image processing attacks, such as noise, cropping, filtering and JPEG compression, and achieves a good tradeoff between invisibility, robustness and security.

  4. The fabrication techniques of Z-pinch targets. Techniques of fabricating self-adapted Z-pinch wire-arrays

    International Nuclear Information System (INIS)

    Qiu Longhui; Wei Yun; Liu Debin; Sun Zuoke; Yuan Yuping

    2002-01-01

    In order to fabricate wire arrays for use in Z-pinch physics experiments, the following fabrication techniques are investigated: gold about 1-1.5 μm thick is electroplated on the surface of ultra-fine tungsten wires; fibers of deuterated polystyrene (DPS) with diameters from 30 to 100 microns are made from molten DPS; and two kinds of planar wire-arrays and four types of annular wire-arrays are designed, which are able to adapt to the variation of the distance between the cathode and anode inside the target chamber. Furthermore, wire-arrays are fabricated from tungsten wires with diameters from 5-24 μm. The on-site test shows that the wire-arrays can self-adapt to the distance changes perfectly.

  5. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); others, and

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/∼andrei/.
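    Of the three reconstruction options listed (Center-of-Gravity, Maximum Likelihood, Least Squares), the Center-of-Gravity estimate is simple enough to sketch directly; the function below is a generic weighted-centroid implementation and is not code from the ANTS package.

```python
import numpy as np

def centre_of_gravity(signals, positions):
    """Centre-of-gravity event reconstruction.

    signals   : (n,) integrated pulse amplitudes of the PMTs/SiPMs
    positions : (n, 2) sensor centre coordinates
    Returns the (x, y) estimate and a simple energy proxy (summed amplitude).
    """
    signals = np.clip(np.asarray(signals, dtype=float), 0.0, None)
    total = signals.sum()
    if total <= 0.0:
        raise ValueError("no light collected")
    xy = signals @ np.asarray(positions, dtype=float) / total
    return xy, total

# Example: 3x3 sensor grid with most light on the central sensor.
pos = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
amp = np.array([1, 2, 1, 2, 10, 2, 1, 2, 1], dtype=float)
print(centre_of_gravity(amp, pos))   # close to (1.0, 1.0)
```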

  6. Algorithms for adaptive histogram equalization

    International Nuclear Information System (INIS)

    Pizer, S.M.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K.

    1986-01-01

    Adaptive histogram equalization (ahe) is a contrast enhancement method designed to be broadly applicable and having demonstrated effectiveness [Zimmerman, 1985]. However, slow speed and the overenhancement of noise it produces in relatively homogeneous regions are two problems. The authors summarize algorithms designed to overcome these and other concerns. These algorithms include interpolated ahe, to speed up the method on general purpose computers; a version of interpolated ahe designed to run in a few seconds on feedback processors; a version of full ahe designed to run in under one second on custom VLSI hardware; and clipped ahe, designed to overcome the problem of overenhancement of noise contrast. The authors conclude that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence
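    Clipped (contrast-limited) adaptive histogram equalization is available off the shelf today; the snippet below uses scikit-image's equalize_adapthist as a stand-in for the algorithms summarized above, with an illustrative kernel size and clip limit rather than the authors' settings.

```python
from skimage import data, exposure

img = data.moon()                      # 8-bit grayscale test image
# kernel_size sets the local region; clip_limit bounds the local histogram
# slope, which is what suppresses noise over-enhancement in flat regions.
clahe = exposure.equalize_adapthist(img, kernel_size=64, clip_limit=0.02)
print(clahe.dtype, float(clahe.min()), float(clahe.max()))   # float image in [0, 1]
```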

  7. Self-Adaptive Step Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Shuhao Yu

    2013-01-01

    Full Text Available In the standard firefly algorithm, each firefly has the same step settings and its value decreases from iteration to iteration. Therefore, the algorithm may fall into a local optimum. Furthermore, the decrease of the step is constrained by the maximum number of iterations, which influences the convergence speed and precision. In order to avoid falling into a local optimum and to reduce the impact of the maximum number of iterations, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to set the step of each firefly so that it varies with the iteration, according to each firefly’s historical information and current situation. Experiments on sixteen standard benchmark functions compare our approach with the standard FA. The results reveal that our method can prevent premature convergence and improve both the convergence speed and the accuracy.

  8. An Optimal Beamforming Algorithm for Phased-Array Antennas Used in Multi-Beam Spaceborne Radiometers

    DEFF Research Database (Denmark)

    Iupikov, O. A.; Ivashina, M. V.; Pontoppidan, K.

    2015-01-01

    Strict requirements for future spaceborne ocean missions using multi-beam radiometers call for new antenna technologies, such as digital beamforming phased arrays. In this paper, we present an optimal beamforming algorithm for phased-array antenna systems designed to operate as focal plane arrays...... to a FPA feeding a torus reflector antenna (designed under the contract with the European Space Agency) and tested for multiple beams. The results demonstrate an improved performance in terms of the optimized beam characteristics, yielding much higher spatial and radiometric resolution as well as much...

  9. A new adaptive GMRES algorithm for achieving high accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Sosonkina, M.; Watson, L.T.; Kapania, R.K. [Virginia Polytechnic Inst., Blacksburg, VA (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
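    An increase-only restart strategy of the kind described can be sketched around SciPy's restarted GMRES. The growth rule below (enlarge k whenever a restart cycle reduces the residual norm by less than 10%) and the parameter names are assumptions, not the paper's criteria; recent SciPy spells the tolerance rtol, while older releases call it tol.

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import gmres

def adaptive_gmres(A, b, k0=10, k_max=100, k_step=10, tol=1e-8, cycles=50):
    """Restarted GMRES whose restart value k grows when convergence stalls."""
    x = np.zeros_like(b)
    k = k0
    r_prev = np.linalg.norm(b)
    for _ in range(cycles):
        # One restart cycle of at most k inner iterations from the current x.
        x, info = gmres(A, b, x0=x, restart=k, maxiter=1, rtol=tol, atol=0.0)
        r = np.linalg.norm(b - A @ x)
        if r <= tol * np.linalg.norm(b):
            return x, k
        if r > 0.9 * r_prev and k < k_max:   # poor reduction: enlarge k
            k = min(k + k_step, k_max)
        r_prev = r
    return x, k

# Example on a diagonally dominant random sparse system.
n = 500
A = (identity(n) * 4 + sprandom(n, n, density=0.01, random_state=0)).tocsr()
b = np.ones(n)
x, k_final = adaptive_gmres(A, b)
print("final restart value:", k_final, "residual:", np.linalg.norm(b - A @ x))
```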

  10. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    Full Text Available To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack and fully simulates the predation behavior and prey-distribution strategy of wolves. It possesses three intelligent behaviors: migration, summons and siege. The competition rule of “winner-take-all” and the update mechanism of “survival of the fittest” are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  11. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    Science.gov (United States)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse-grained Inconel 625, which is representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A, B, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying this to three FMC imaging algorithms: Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and probability of False Alarm (PFA). The data is captured in pulse-echo mode using 64 element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.

  12. International Timetabling Competition 2011: An Adaptive Large Neighborhood Search algorithm

    DEFF Research Database (Denmark)

    Sørensen, Matias; Kristiansen, Simon; Stidsen, Thomas Riis

    2012-01-01

    An algorithm based on Adaptive Large Neighborhood Search (ALNS) for solving the generalized High School Timetabling problem in XHSTT-format (Post et al (2012a)) is presented. This algorithm was among the finalists of round 2 of the International Timetabling Competition 2011 (ITC2011). For problem...

  13. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics (AO) images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, a mathematical model is built for the degraded multiframe adaptive optics images. The model of the time-varying point spread function is deduced based on the phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for the parameter estimation is built. Lastly, image-restoration experiments on both simulated and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD or RL-IBD algorithm, our algorithm reduces the number of iterations by 14.3% and improves the estimation accuracy. The model identifies the PSF of the AO images and recovers the observed target images clearly.

  14. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    Science.gov (United States)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly include frequency jump and frequency drift jump. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor; the prediction state covariance matrix is corrected in real time by the adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified on the frequency jump simulation, the frequency drift jump simulation and the measured data of the atomic clock by using the chi-square test.
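    A minimal sketch of the residual-driven idea: compute the innovation, form a normalised innovation statistic, and inflate the predicted covariance by an adaptive factor when the statistic is large. The scaling rule, the threshold and the two-state clock model in the example are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_kalman_step(x, P, z, F, H, Q, R, threshold=3.0):
    """One predict/update step of a Kalman filter with a residual-driven
    adaptive factor that inflates the predicted covariance when the
    innovation is anomalously large."""
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Innovation (residual) and its predicted covariance
    v = z - H @ x_pred
    S = H @ P_pred @ H.T + R

    # Normalised innovation statistic; values above the threshold suggest
    # that the dynamic model no longer matches the data.
    stat = float(v @ np.linalg.inv(S) @ v)
    alpha = max(1.0, stat / threshold)      # adaptive factor
    P_pred = alpha * P_pred                 # down-weight the prediction
    S = H @ P_pred @ H.T + R

    # Update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ v
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, stat

# Example: 2-state clock model (phase, frequency) with a phase measurement.
tau = 1.0
F = np.array([[1.0, tau], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[1e-4]])
x, P = np.zeros(2), np.eye(2)
for z in [0.01, 0.012, 0.05, 0.013]:        # the third sample mimics a jump
    x, P, stat = adaptive_kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(stat)
```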

  15. An Improved Chaos Genetic Algorithm for T-Shaped MIMO Radar Antenna Array Optimization

    Directory of Open Access Journals (Sweden)

    Xin Fu

    2014-01-01

    Full Text Available In view of the fact that the traditional genetic algorithm easily falls into a local optimum in the late iterations, an improved chaos genetic algorithm that combines chaos theory and the genetic algorithm is presented to optimize the low side-lobe of a T-shaped MIMO radar antenna array. A novel two-dimensional Cat chaotic map is put forward to produce the initial population, improving the diversity of individuals. An improved Tent map is used to apply chaos disturbance to groups of individuals in each generation, and an improved chaotic genetic algorithm optimization model is established. The algorithm presented in this paper not only improves the search precision, but also effectively avoids local convergence and prematurity. For MIMO radar, the improved chaos genetic algorithm obtains a lower side-lobe level by optimizing the exciting current amplitude. Simulation results show that the algorithm is feasible and effective, and its performance is superior to the traditional genetic algorithm.

  16. Synthesis of Conformal Phased Antenna Arrays With A Novel Multiobjective Invasive Weed Optimization Algorithm

    Science.gov (United States)

    Li, Wen Tao; Hei, Yong Qiang; Shi, Xiao Wei

    2018-04-01

    By virtue of their excellent aerodynamic performance, conformal phased arrays have been attracting considerable attention. However, when synthesizing low/ultra-low sidelobe patterns with conventional conformal arrays, the obtained dynamic range ratios of the amplitude excitations can be quite high, which results in stringent requirements on various error tolerances for practical implementation. The time-modulated array (TMA) has the advantages of low sidelobes and a reduced dynamic range ratio requirement for the amplitude excitations. This paper takes full advantage of conformal antenna arrays and time-modulated arrays. The active-element pattern, including element mutual coupling and platform effects, is employed in the whole design process. To optimize the pulse durations and the switch-on instants of the time-modulated elements, a multiobjective invasive weed optimization (MOIWO) algorithm based on the nondominated sorting of the solutions is proposed. An S-band 8-element cylindrical conformal array is designed, and an S-band 16-element cylindrical-parabolic conformal array is constructed and tested at two different steering angles.

  17. Research and Application on Fractional-Order Darwinian PSO Based Adaptive Extended Kalman Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    Qiguang Zhu

    2014-05-01

    Full Text Available To resolve the difficulty of establishing an accurate a priori noise model for the extended Kalman filtering algorithm, the fractional-order Darwinian particle swarm optimization (PSO) algorithm is proposed and introduced into the fuzzy adaptive extended Kalman filtering algorithm. The natural selection method is adopted to improve the standard particle swarm optimization algorithm, which enhances the diversity of particles and avoids premature convergence. In addition, fractional calculus is used to improve the evolution speed of the particles. The improved PSO algorithm is then applied to train the fuzzy adaptive extended Kalman filter and achieve simultaneous localization and mapping. The simulation results show that, compared with the fuzzy adaptive extended Kalman filter localization and mapping algorithm trained by geese particle swarm optimization, the proposed approach is greatly improved in terms of localization and mapping.

  18. An Adaptive Pruning Algorithm for the Discrete L-Curve Criterion

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg; Rodriguez, Giuseppe

    2004-01-01

    SVD or regularizing CG iterations). Our algorithm needs no pre-defined parameters, and in order to capture the global features of the curve in an adaptive fashion, we use a sequence of pruned L-curves that correspond to considering the curves at different scales. We compare our new algorithm...

  19. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    Directory of Open Access Journals (Sweden)

    Xuhui Bu

    2012-01-01

    Full Text Available The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropout, but the convergence speed of the system output decreases as the dropout rate increases. This paper proposes an MFAC algorithm with data-dropout compensation. The missing data are first estimated using the dynamical linearization method, and then the estimated values are used to update the control input. A convergence analysis of the proposed MFAC algorithm is given, and its effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of the data dropout, and better output performance can be obtained.

  20. Parallel Algorithm for Adaptive Numerical Integration

    International Nuclear Information System (INIS)

    Sujatmiko, M.; Basarudin, T.

    1997-01-01

    This paper presents an automatic algorithm for integration using the adaptive trapezoidal method. The interval is divided adaptively, so that the widths of the subintervals differ and fit the behavior of the function. For a function f, the integral over the interval [a,b] can be obtained, with maximum tolerance ε, using the estimation (f, a, b, ε). The estimated solution is accepted if the error stays within a reasonable range, i.e. fulfils certain criteria. If the error is large, however, the problem is solved by dividing it into two similar and independent subproblems on the separate intervals [a, (a+b)/2] and [(a+b)/2, b], i.e. the estimations (f, a, (a+b)/2, ε/2) and (f, (a+b)/2, b, ε/2). The problems are solved by two different kinds of processor, a root processor and worker processors. The root processor divides the main problem into subproblems and distributes them to the worker processors. The division mechanism may continue until all of the subproblems are resolved. The solution of each subproblem is then returned to the root processor so that the solution of the main problem can be obtained. The algorithm is implemented on a C-programming-based distributed computer networking system under the parallel virtual machine platform.
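    The serial core of the estimation(f, a, b, ε) recursion can be sketched as follows; the acceptance test and the factor 3 are conventional choices for trapezoidal refinement, and the parallel distribution of subproblems to worker processors is omitted here.

```python
import math

def adaptive_trapezoid(f, a, b, eps):
    """Recursive adaptive trapezoidal quadrature, i.e. estimation(f, a, b, eps).

    If halving the interval changes the estimate by more than about eps, the
    problem is split into two independent subproblems with tolerance eps/2,
    which is exactly the subdivision that could be handed to worker processors.
    """
    m = 0.5 * (a + b)
    whole = 0.5 * (b - a) * (f(a) + f(b))                 # one trapezoid
    halves = 0.25 * (b - a) * (f(a) + 2.0 * f(m) + f(b))  # two trapezoids
    if abs(whole - halves) <= 3.0 * eps:                  # accept refined value
        return halves
    return (adaptive_trapezoid(f, a, m, eps / 2) +
            adaptive_trapezoid(f, m, b, eps / 2))

print(adaptive_trapezoid(math.sin, 0.0, math.pi, 1e-6))   # close to 2.0
```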

  1. Zero-crossing detection algorithm for arrays of optical spatial filtering velocimetry sensors

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Pedersen, Finn; Hanson, Steen Grüner

    2008-01-01

    This paper presents a zero-crossing detection algorithm for arrays of compact low-cost optical sensors based on spatial filtering for measuring fluctuations in angular velocity of rotating solid structures. The algorithm is applicable for signals with moderate signal-to-noise ratios, and delivers...... repeating the same measurement error for each revolution of the target, and to gain high performance measurement of angular velocity. The traditional zero-crossing detection is extended by 1) inserting an appropriate band-pass filter before the zero-crossing detection, 2) measuring time periods between zero...
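    A generic version of steps 1) and 2), band-pass filtering followed by interpolated zero-crossing timing, might look like the sketch below. The SciPy Butterworth filter, its order, and the linear interpolation of crossing instants are assumptions for illustration, not the sensor's actual processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_crossing_periods(signal, fs, f_lo, f_hi, order=4):
    """Band-pass filter a spatial-filtering sensor signal and return the
    time intervals between successive rising zero crossings."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    y = filtfilt(b, a, signal)
    # Rising zero crossings: sign changes from negative to non-negative.
    idx = np.flatnonzero((y[:-1] < 0) & (y[1:] >= 0))
    # Linear interpolation refines the crossing instants between samples.
    frac = -y[idx] / (y[idx + 1] - y[idx])
    t = (idx + frac) / fs
    return np.diff(t)        # periods; instantaneous frequency = 1 / period

# Example: 200 Hz tone with noise, sampled at 10 kHz.
fs = 10000.0
t = np.arange(20000) / fs
sig = (np.sin(2 * np.pi * 200.0 * t)
       + 0.2 * np.random.default_rng(0).standard_normal(t.size))
periods = zero_crossing_periods(sig, fs, 100.0, 400.0)
print(1.0 / periods.mean())   # approximately 200 Hz
```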

  2. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, it needs to find several nearest neighboring data points for each interpolated point to adaptively determine the power parameter; and then the desired prediction value of the interpolated point is obtained by weighted interpolating using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolating. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
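    The two stages described (kNN search, then distance weighting with a locally chosen power parameter) can be mimicked on the CPU with a k-d tree. The sketch below is a serial NumPy/SciPy stand-in for the GPU implementation, and the mapping from local point density to the power alpha is an illustrative assumption rather than the paper's rule.

```python
import numpy as np
from scipy.spatial import cKDTree

def aidw_interpolate(sample_xy, sample_z, query_xy, k=12,
                     alpha_min=1.0, alpha_max=5.0):
    """CPU sketch of Adaptive IDW: the power parameter alpha is chosen per
    query point from the local density of its k nearest neighbours."""
    tree = cKDTree(sample_xy)
    dist, idx = tree.query(query_xy, k=k)          # kNN search stage

    # Local density measure: mean distance to the k neighbours, mapped to
    # [0, 1] over the query set and used to pick alpha in [alpha_min, alpha_max].
    mean_d = dist.mean(axis=1)
    t = (mean_d - mean_d.min()) / (np.ptp(mean_d) + 1e-12)
    alpha = alpha_min + t * (alpha_max - alpha_min)

    # Weighted interpolation stage with the adaptive power.
    w = 1.0 / np.power(dist + 1e-12, alpha[:, None])
    w /= w.sum(axis=1, keepdims=True)
    return (w * sample_z[idx]).sum(axis=1)

# Example: noisy samples of z = sin(x) * cos(y) interpolated onto random points.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(2000, 2))
z = np.sin(xy[:, 0]) * np.cos(xy[:, 1])
grid = rng.uniform(0, 10, size=(500, 2))
z_hat = aidw_interpolate(xy, z, grid)
```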

  3. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of the reconstructed image to about 7.02 s compared with conventional algorithms.

  4. A numerically stable, finite memory, fast array recursive least squares algorithm for broadband active noise control

    NARCIS (Netherlands)

    van Ophem, S.; Berkhoff, Arthur P.

    2016-01-01

    For broadband active noise control applications with a rapidly changing primary path, it is desirable to find algorithms with a rapid convergence, a fast tracking performance, and a low computational cost. Recently, a promising algorithm has been presented, called the fast-array Kalman filter, which

  5. A Feed-forward Geometrical Compensation and Adaptive Feedback Control Algorithm for Hydraulic Robot Manipulators

    DEFF Research Database (Denmark)

    Conrad, Finn; Zhou, Jianjun; Gabacik, Andrzej

    1998-01-01

    Invited paper presents a new control algorithm based on feed-forward geometrical compensation strategy combined with adaptive feedback control.

  6. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    Directory of Open Access Journals (Sweden)

    Manuel Prado-Velasco

    2013-10-01

    Full Text Available Boosted by health consequences and the cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers—unobtrusiveness and reliability—defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity, in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that addresses the adaptive capability, and this is also presented. The work demonstrates the robustness and reliability of our new algorithm, which will be the basis of a smart falling monitor. This is shown in this work to underline the relevance of the results.

  7. Statistical Algorithm for the Adaptation of Detection Thresholds

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    Many event detection mechanisms in spark ignition automotive engines are based on the comparison of the engine signals to the detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds...... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008...
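    One generic way to keep the false-alarm rate roughly constant as signal quality drifts is to track the signal statistics recursively and place the threshold a fixed number of standard deviations above the mean. The sketch below illustrates that idea only; the forgetting factor and multiplier are assumptions, not the paper's algorithm.

```python
import numpy as np

def adapt_threshold(samples, k=4.0, forget=0.995, mean0=0.0, var0=1.0):
    """Recursively track the signal mean and variance with exponential
    forgetting and set the detection threshold k standard deviations above
    the mean, so the nominal false-alarm rate stays constant as the
    engine/sensor ages."""
    mean, var = mean0, var0
    thresholds = np.empty(len(samples))
    for i, s in enumerate(samples):
        mean = forget * mean + (1.0 - forget) * s
        var = forget * var + (1.0 - forget) * (s - mean) ** 2
        thresholds[i] = mean + k * np.sqrt(var)
    return thresholds

# Example: the second half of the record mimics an aged, noisier sensor.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 5000)
signal[2500:] *= 2.0
th = adapt_threshold(signal)
print(th[100], th[-1])        # the threshold grows with the noise level
```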

  8. An adaptive occlusion culling algorithm for use in large ves

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2000-01-01

    The Hierarchical Occlusion Map algorithm is combined with Frustum Slicing to give a simpler occlusion-culling algorithm that more adequately caters to large, open VEs. The algorithm adapts to the level of visual congestion and is well suited for use with large, complex models with long mean free ...... line of sight ('the great outdoors'), models for which it is not feasible to construct, or search, a database of occluders to be rendered each frame....

  9. An Adaptive Large Neighborhood Search Algorithm for the Resource-constrained Project Scheduling Problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt

    2009-01-01

    We present an application of an Adaptive Large Neighborhood Search (ALNS) algorithm to the Resource-constrained Project Scheduling Problem (RCPSP). The ALNS framework was first proposed by Pisinger and Røpke [19] and can be described as a large neighborhood search algorithm with an adaptive layer......, where a set of destroy/repair neighborhoods compete to modify the current solution in each iteration of the algorithm. Experiments are performed on the wellknown J30, J60 and J120 benchmark instances, which show that the proposed algorithm is competitive and confirms the strength of the ALNS framework...

  10. Combination Adaptive Traffic Algorithm and Coordinated Sleeping in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    M. Udin Harun Al Rasyid

    2014-12-01

    Full Text Available A wireless sensor network (WSN) uses a battery as its primary power source, so a WSN is limited by battery power during long operations. A WSN should therefore be able to reduce its energy consumption in order to operate for a long time. WSNs have the potential to be the future of wireless communication solutions: they are small but offer a variety of functions that can help human life, and their wide variety of sensors and fast communication make it easier for people to obtain information accurately and quickly. In this study, we combine an adaptive traffic algorithm and coordinated sleeping as a power-efficient WSN solution. We compared the performance of our proposed combination of the adaptive traffic and coordinated sleeping algorithms with a non-adaptive scheme. The simulation results show that our proposal achieves good-quality data transmission and is more efficient in energy consumption, but it has a higher delay than the non-adaptive scheme. Keywords: WSN, adaptive traffic, coordinated sleeping, beacon order, superframe order.

  11. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun; Kavusi, Sam; Salama, Khaled N.

    2012-01-01

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of Photo

  12. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical-threshold adaptive point cloud filtering algorithm based on moving surface fitting is proposed. Firstly, the noisy points are removed by a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up through the lowest point among the neighborhood grids. The real height and the fitted height are calculated, and the difference between the elevation and the threshold can be determined. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well-adapted and produces highly accurate filtering results.

  13. A New Normalizing Algorithm for BAC CGH Arrays with Quality Control Metrics

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Miecznikowski

    2011-01-01

    Full Text Available The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model this data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose “SmoothArray”, a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package “aCGHplus,” this freely available algorithm removes the variation due to the intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.

  14. Adaptive Neural Network Algorithm for Power Control in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Husam Fayiz, Al Masri

    2017-01-01

    The aim of this paper is to design, test and evaluate a prototype of an adaptive neural network algorithm for the power controlling system of a nuclear power plant. The task of power control in nuclear reactors is one of the fundamental tasks in this field. Therefore, research is constantly conducted to ameliorate the power reactor control process. Currently, in the Department of Automation in the National Research Nuclear University (NRNU) MEPhI, numerous studies are utilizing various methodologies of artificial intelligence (expert systems, neural networks, fuzzy systems and genetic algorithms) to enhance the performance, safety, efficiency and reliability of nuclear power plants. In particular, a study of an adaptive artificial intelligent power regulator in the control systems of nuclear power reactors is being undertaken to enhance performance and to minimize the output error of the Automatic Power Controller (APC) on the grounds of a multifunctional computer analyzer (simulator) of the Water-Water Energetic Reactor known as Vodo-Vodyanoi Energetichesky Reaktor (VVER) in Russian. In this paper, a block diagram of an adaptive reactor power controller was built on the basis of an intelligent control algorithm. When implementing intelligent neural network principles, it is possible to improve the quality and dynamics of any control system in accordance with the principles of adaptive control. It is common knowledge that an adaptive control system permits adjusting the controller’s parameters according to the transitions in the characteristics of the control object or external disturbances. In this project, it is demonstrated that a propitious option for an automatic power controller in nuclear power plants is a control system constructed on intelligent neural network algorithms. (paper)

  15. Layout Optimisation of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Pau Mercadé Ruiz

    2017-08-01

    Full Text Available This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.

  16. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using previously suggested Hungarian mating schemes. The Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to elect one of these Hungarian mating schemes: every mated pair of solutions votes for the mating scheme of the next generation, and the distance between parents and the distance between parent and offspring are considered when they vote. Two well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only pure and previous hybrid schemes but also existing distance-based mating schemes.

  17. An adaptive tensor voting algorithm combined with texture spectrum

    Science.gov (United States)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image in accordance with human visual perception. At the same time, the proposed method improves the edge extraction quality, efficiently reducing flocculent regions and making the image clearer. In the experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with the significant curve feature threshold procedure, and the resulting image clearly reveals the faint crack signals submerged in the complicated background.

  18. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to the changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.

  19. An adaptive clustering algorithm for image matching based on corner feature

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms often fail to balance real-time performance and accuracy. To address this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method builds on the similarity of the vectors formed by matching point pairs, on which adaptive clustering is performed. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched with the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points after clustering. The experimental results show that the proposed algorithm effectively eliminates most wrong matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and reduces the computational load of the whole matching process.
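
    As a rough sketch of the stages named in the abstract (Harris corners, NCC matching, RANSAC), the OpenCV-based skeleton below omits the paper's adaptive clustering step and uses illustrative parameter values; it is not the authors' implementation.

```python
import cv2
import numpy as np

def match_corners(ref, per, win=11, ncc_thresh=0.8):
    """Corner-based matching skeleton: Harris corners -> NCC -> RANSAC.

    ref, per: grayscale uint8 images (reference and perceived).
    The paper's adaptive clustering step between NCC and RANSAC is omitted.
    """
    # Harris-based corner extraction.
    pts_ref = cv2.goodFeaturesToTrack(ref, 500, 0.01, 10, useHarrisDetector=True)
    pts_per = cv2.goodFeaturesToTrack(per, 500, 0.01, 10, useHarrisDetector=True)
    pts_ref = pts_ref.reshape(-1, 2).astype(int)
    pts_per = pts_per.reshape(-1, 2).astype(int)

    r = win // 2
    def patch(img, p):
        x, y = p
        return img[y - r:y + r + 1, x - r:x + r + 1]

    src, dst = [], []
    for p in pts_ref:
        tpl = patch(ref, p)
        if tpl.shape != (win, win):
            continue
        best, best_q = None, ncc_thresh
        for q in pts_per:
            cand = patch(per, q)
            if cand.shape != (win, win):
                continue
            # NCC score between the two equal-sized patches (1x1 result).
            score = cv2.matchTemplate(cand, tpl, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_q:
                best, best_q = q, score
        if best is not None:
            src.append(p); dst.append(best)

    # RANSAC rejects the remaining wrong matches while fitting a homography.
    H, mask = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    inliers = mask.ravel() == 1
    return H, np.float32(src)[inliers], np.float32(dst)[inliers]
```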

  20. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    Science.gov (United States)

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes
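
    The reward-aggregation question the paper studies can be made concrete with a small, hypothetical operator-selection sketch: each meme keeps a window of recent fitness improvements, and its selection probability follows either the mean or the extreme (maximum) improvement. The class below is only an illustration of that choice, not the COMA framework itself.

```python
import numpy as np

class OperatorSelector:
    """Adaptive selection among local-search operators (memes).

    Each operator keeps a window of recent fitness improvements; its
    selection probability follows its estimated value, aggregated either
    as the mean or as the extreme (max) improvement, the two reward
    schemes compared in the paper. Names and window size are illustrative.
    """
    def __init__(self, n_ops, window=20, p_min=0.05, aggregate="mean"):
        self.history = [[] for _ in range(n_ops)]
        self.window, self.p_min, self.aggregate = window, p_min, aggregate

    def probabilities(self):
        agg = np.max if self.aggregate == "extreme" else np.mean
        value = np.array([agg(h[-self.window:]) if h else 1.0 for h in self.history])
        value = np.maximum(value, 0.0)
        n = len(self.history)
        if value.sum() == 0:
            return np.full(n, 1.0 / n)
        # Probability matching with a floor so no operator starves.
        return self.p_min + (1 - n * self.p_min) * value / value.sum()

    def select(self, rng):
        return rng.choice(len(self.history), p=self.probabilities())

    def update(self, op, improvement):
        self.history[op].append(improvement)
```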

  1. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best, or at least a better, configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm alone, but found that the resulting arrays had poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are fixed according to the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the approach.

  2. Fourier rebinning algorithm for inverse geometry CT.

    Science.gov (United States)

    Mazin, Samuel R; Pele, Norbert J

    2008-11-01

    Inverse geometry computed tomography (IGCT) is a new type of volumetric CT geometry that employs a large array of x-ray sources opposite a smaller detector array. Volumetric coverage and high isotropic resolution produce very large data sets and therefore require a computationally efficient three-dimensional reconstruction algorithm. The purpose of this work was to adapt and evaluate a fast algorithm based on Defrise's Fourier rebinning (FORE), originally developed for positron emission tomography. The results were compared with the average of FDK reconstructions from each source row. The FORE algorithm is an order of magnitude faster than the FDK-type method for the case of 11 source rows. In the center of the field-of-view both algorithms exhibited the same resolution and noise performance. FORE exhibited some resolution loss (and less noise) in the periphery of the field-of-view. FORE appears to be a fast and reasonably accurate reconstruction method for IGCT.

  3. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    Science.gov (United States)

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm (regarding accuracy and runtime) in handling different types of concept drift. PMID:25879051

  4. Performance study of LMS based adaptive algorithms for unknown system identification

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Shazia; Ahmad, Noor Atinah [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang (Malaysia)

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.

  5. Performance study of LMS based adaptive algorithms for unknown system identification

    International Nuclear Information System (INIS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.
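
    A minimal sketch of the adaptive system identification setup is given below: an unknown FIR plant is identified from noisy observations with either LMS or NLMS, and the misalignment is reported. Filter length, step sizes and noise level are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def identify(plant, n_samples=5000, mu=0.05, eps=1e-6, normalized=True, seed=0):
    """Identify an unknown FIR 'plant' with (N)LMS from noisy observations."""
    rng = np.random.default_rng(seed)
    M = len(plant)
    w = np.zeros(M)                      # adaptive filter weights
    x_buf = np.zeros(M)                  # most recent M input samples
    mse = np.empty(n_samples)
    for n in range(n_samples):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.standard_normal() # white input signal
        d = plant @ x_buf + 0.01 * rng.standard_normal()  # noisy desired output
        e = d - w @ x_buf                # a priori error
        step = mu / (eps + x_buf @ x_buf) if normalized else mu
        w += step * e * x_buf            # (N)LMS weight update
        mse[n] = e * e
    misalignment = np.linalg.norm(w - plant) / np.linalg.norm(plant)
    return w, mse, misalignment

plant = np.random.default_rng(1).standard_normal(16)   # hypothetical unknown system
_, mse_lms, mis_lms = identify(plant, normalized=False, mu=0.01)
_, mse_nlms, mis_nlms = identify(plant, normalized=True, mu=0.5)
```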

  6. The optimal configuration of photovoltaic module arrays based on adaptive switching controls

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang; Lai, Pei-Lun; Liao, Bo-Jyun

    2015-01-01

    Highlights: • We propose a strategy for determining the optimal configuration of a PV array. • The proposed strategy was based on particle swarm optimization (PSO) method. • It can identify the optimal module array connection scheme in the event of shading. • It can also find the optimal connection of a PV array even in module malfunctions. - Abstract: This study proposes a strategy for determining the optimal configuration of photovoltaic (PV) module arrays in shading or malfunction conditions. This strategy was based on particle swarm optimization (PSO). If shading or malfunctions of the photovoltaic module array occur, the module array immediately undergoes adaptive reconfiguration to increase the power output of the PV power generation system. First, the maximal power generated at various irradiation levels and temperatures was recorded during normal array operation. Subsequently, the irradiation level and module temperature, regardless of operating conditions, were used to recall the maximal power previously recorded. This previous maximum was compared with the maximal power value obtained using the maximum power point tracker to assess whether the PV module array was experiencing shading or malfunctions. After determining that the array was experiencing shading or malfunctions, PSO was used to identify the optimal module array connection scheme in abnormal conditions, and connection switches were used to implement optimal array reconfiguration. Finally, experiments were conducted to assess the strategy for identifying the optimal reconfiguration of a PV module array in the event of shading or malfunctions

  7. Maximum power point tracking of partially shaded solar photovoltaic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Roy Chowdhury, Shubhajit; Saha, Hiranmay [IC Design and Fabrication Centre, Department of Electronics and Telecommunication Engineering, Jadavpur University (India)

    2010-09-15

    The paper presents the simulation and hardware implementation of maximum power point (MPP) tracking of a partially shaded solar photovoltaic (PV) array using a variant of Particle Swarm Optimization known as Adaptive Perceptive Particle Swarm Optimization (APPSO). Under partially shaded conditions, the photovoltaic (PV) array characteristics become more complex, with multiple maxima in the power-voltage characteristic. The paper presents an algorithmic technique to accurately track the maximum power point (MPP) of a PV array using APPSO. The APPSO algorithm has also been validated in the current work. The proposed technique uses only one pair of sensors to control multiple PV arrays. This results in lower cost and a higher accuracy of 97.7%, compared to the accuracy of 96.41% obtained earlier using Particle Swarm Optimization. The proposed tracking technique has been mapped onto an MSP430FG4618 microcontroller for tracking and control purposes. The whole system based on the proposed technique has been realized on a standard two-stage power electronic system configuration. (author)

  8. Research on the Random Shock Vibration Test Based on the Filter-X LMS Adaptive Inverse Control Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2016-01-01

    Full Text Available The related theory and algorithms of adaptive inverse control are presented, and the research shows that an adaptive inverse control strategy can effectively eliminate the influence of noise on system control. A frequency-domain filter-X LMS adaptive inverse control algorithm is proposed and applied to the random shock vibration control process of a two-exciter hydraulic vibration test system, and the realization of the adaptive inverse control strategy in the random shock vibration test is summarized. Self-closed-loop and field tests show that the frequency-domain filter-X LMS adaptive inverse control algorithm can realize high-precision control of the random shock vibration test.
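
    For illustration only, the sketch below implements the basic time-domain filter-X LMS update (the paper uses a frequency-domain variant); the secondary-path impulse response and its model are hypothetical inputs standing in for the exciter/plant dynamics.

```python
import numpy as np

def fxlms(reference, disturbance, sec_path, sec_path_est, L=64, mu=0.01):
    """Basic time-domain filter-X LMS loop (illustrative sketch only).

    reference: drive/reference signal x(n)
    disturbance: signal to be tracked/cancelled at the control point d(n)
    sec_path: true secondary-path impulse response (actuator + plant)
    sec_path_est: its model, used to generate the filtered reference
    """
    N = len(reference)
    w = np.zeros(L)                 # adaptive controller weights
    x_buf = np.zeros(L)             # reference history for the controller
    xf_buf = np.zeros(L)            # filtered-reference history for the update
    y_hist = np.zeros(len(sec_path))
    xf_hist = np.zeros(len(sec_path_est))
    e = np.empty(N)
    for n in range(N):
        x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
        y = w @ x_buf                               # controller output
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        e[n] = disturbance[n] - sec_path @ y_hist   # residual at the sensor
        xf_hist = np.roll(xf_hist, 1); xf_hist[0] = reference[n]
        xf = sec_path_est @ xf_hist                 # filtered reference sample
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
        w += mu * e[n] * xf_buf                     # FxLMS weight update
    return w, e
```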

  9. A new parallel algorithm and its simulation on hypercube simulator for low pass digital image filtering using systolic array

    International Nuclear Information System (INIS)

    Al-Hallaq, A.; Amin, S.

    1998-01-01

    This paper introduces a new parallel algorithm and its simulation on a hypercube simulator for low pass digital image filtering using a systolic array. The new algorithm is faster than the old one (Amin, 1988). This is due to the fact that the old algorithm carries out the addition operations in a sequential mode, whereas in the new design these addition operations are divided into two groups that can be performed in parallel: one group is performed on one half of the systolic array and the other on the second half, that is, by folding. This parallelism reduces the time required for the whole process by almost a quarter of the time of the old algorithm. (authors). 18 refs., 3 figs

  10. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Zheng, E-mail: 19994035@sina.com [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Wang, Jun; Zhou, Bihua [National Defense Key Laboratory on Lightning Protection and Electromagnetic Camouflage, PLA University of Science and Technology, Nanjing 210007 (China); Zhou, Shudao [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044 (China)

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  11. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    International Nuclear Information System (INIS)

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-01-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
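
    A compact sketch of the idea, assuming a generic bound-constrained objective, is shown below: cuckoo search with Levy flights in which the step-size parameter shrinks adaptively over the iterations and a simulated-annealing acceptance rule replaces greedy replacement. The specific adaptation and annealing schedules are illustrative and do not reproduce the paper's rules.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    """Mantegna's algorithm for Levy-distributed steps."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def acs_sa(f, bounds, n_nests=25, iters=500, seed=0):
    """Cuckoo search with adaptive step size and SA-style acceptance (sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    best = nests[fit.argmin()].copy()
    for t in range(iters):
        alpha = 0.5 * (1 - t / iters)        # adaptive step size: shrinks over time
        temp = 1.0 * (1 - t / iters) + 1e-9  # SA temperature schedule
        for i in range(n_nests):
            cand = nests[i] + alpha * levy_step(dim, rng) * (nests[i] - best)
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            # SA acceptance: always keep improvements, sometimes accept worse ones.
            if fc < fit[i] or rng.random() < np.exp((fit[i] - fc) / temp):
                nests[i], fit[i] = cand, fc
        # Abandon a fraction of the worst nests (discovery probability pa = 0.25).
        worst = fit.argsort()[-int(0.25 * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fit[worst] = [f(x) for x in nests[worst]]
        best = nests[fit.argmin()].copy()
    return best, fit.min()
```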

  12. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.

  13. Narrowband direction of arrival estimation for antenna arrays

    CERN Document Server

    Foutz, Jeffrey

    2008-01-01

    This book provides an introduction to narrowband array signal processing, classical and subspace-based direction of arrival (DOA) estimation with an extensive discussion on adaptive direction of arrival algorithms. The book begins with a presentation of the basic theory, equations, and data models of narrowband arrays. It then discusses basic beamforming methods and describes how they relate to DOA estimation. Several of the most common classical and subspace-based direction of arrival methods are discussed. The book concludes with an introduction to subspace tracking and shows how subspace tr

  14. A locally adaptive algorithm for shadow correction in color images

    Science.gov (United States)

    Karnaukhov, Victor; Kober, Vitaly

    2017-09-01

    The paper deals with correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much brighter than the rest of the scene. A locally-adaptive algorithm for correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics followed by correction of nonuniform illumination with human visual perception approach. The performance of the proposed algorithm is compared to that of common algorithms for correction of color images containing shadow regions.

  15. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio; Tamellini, Lorenzo; Tesei, Francesco; Tempone, Raul

    2016-01-01

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature

  16. Urinary Colorimetric Sensor Array and Algorithm to Distinguish Kawasaki Disease from Other Febrile Illnesses.

    Directory of Open Access Journals (Sweden)

    Zhen Li

    Full Text Available Kawasaki disease (KD) is an acute pediatric vasculitis of infants and young children with unknown etiology and no specific laboratory-based test for its identification. A specific molecular diagnostic test is urgently needed to support the clinical decision of proper medical intervention, preventing subsequent complications of coronary artery aneurysms. We used a simple and low-cost colorimetric sensor array to address the lack of a specific diagnostic test to differentiate KD from febrile control (FC) patients with similar rash/fever illnesses. Demographic and clinical data were prospectively collected for subjects with KD and FCs under a standard protocol. After screening using a genetic algorithm, eleven compounds, drawn from the metalloporphyrin, pH indicator, redox indicator and solvatochromic dye categories, were selected from our chromatic compound library (n = 190) to construct a colorimetric sensor array for diagnosing KD. Quantitative color difference analysis led to a decision-tree-based KD diagnostic algorithm. This KD sensing array allowed the identification of 94% of KD subjects (receiver operating characteristic [ROC] area under the curve [AUC] 0.981) in the training set (33 KD, 33 FC) and 94% of KD subjects (ROC AUC: 0.873) in the testing set (16 KD, 17 FC). Color difference maps reconstructed from the digital images of the sensing compounds demonstrated distinctive patterns differentiating KD from FC patients. The colorimetric sensor array, composed of commonly used chemical compounds, is an easily accessible, low-cost method for discriminating subjects with KD from those with other febrile illnesses.

  17. Frequency-Domain Adaptive Algorithm for Network Echo Cancellation in VoIP

    Directory of Open Access Journals (Sweden)

    Patrick A. Naylor

    2008-05-01

    Full Text Available We propose a new low complexity, low delay, and fast converging frequency-domain adaptive algorithm for network echo cancellation in VoIP exploiting MMax and sparse partial (SP) tap-selection criteria in the frequency domain. We incorporate these tap-selection techniques into the multidelay filtering (MDF) algorithm in order to mitigate the delay inherent in frequency-domain algorithms. We illustrate two such approaches and discuss their tradeoff between convergence performance and computational complexity. Simulation results show an improvement in convergence rate for the proposed algorithm over MDF and significantly reduced complexity. The proposed algorithm achieves a convergence performance close to that of the recently proposed, but substantially more complex, improved proportionate MDF (IPMDF) algorithm.

  18. Efficient synthesis of large-scale thinned arrays using a density-taper initialised genetic algorithm

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2011-09-01

    Full Text Available The use of the density-taper approach to initialise a genetic algorithm is shown to give excellent results in the synthesis of thinned arrays. This approach is shown to give better SLL values more consistently than using random values and difference...

  19. Hierarchical adaptive experimental design for Gaussian process emulators

    International Nuclear Information System (INIS)

    Busby, Daniel

    2009-01-01

    Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, in which suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the least possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to perform only the number of simulations necessary to reach the required approximation accuracy.
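
    For reference, a maximin Latin hypercube design of the kind used as the one-stage baseline can be approximated very simply: draw many random Latin hypercubes and keep the one whose minimum pairwise distance is largest. The sketch below follows that brute-force recipe and is not the construction used in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def maximin_lhs(n_points, dim, n_candidates=200, seed=0):
    """Approximate maximin Latin hypercube design on [0, 1]^dim.

    Draws n_candidates random LHS designs and keeps the one whose minimum
    pairwise distance is largest (a simple stand-in for a true maximin LHS).
    """
    rng = np.random.default_rng(seed)
    best_design, best_score = None, -np.inf
    for _ in range(n_candidates):
        # One random Latin hypercube: one point per stratum in each dimension.
        design = np.empty((n_points, dim))
        for d in range(dim):
            perm = rng.permutation(n_points)
            design[:, d] = (perm + rng.random(n_points)) / n_points
        score = pdist(design).min()          # minimum pairwise distance
        if score > best_score:
            best_design, best_score = design, score
    return best_design

# e.g. a 20-point design in 3 input dimensions for an emulator training set
X = maximin_lhs(20, 3)
```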

  20. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
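
    As background for the classical adaptive processing mentioned in the review, the sketch below computes the Capon (MVDR) spatial spectrum for a linear equidistant array from simulated snapshots; the data model, element spacing, and diagonal loading are illustrative assumptions.

```python
import numpy as np

def capon_spectrum(snapshots, n_sensors, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Capon (MVDR) spatial spectrum for a uniform linear array.

    snapshots: (n_sensors, n_snapshots) complex array data
    d: element spacing in wavelengths
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(n_sensors))       # diagonal loading
    spectrum = np.empty(len(angles))
    for k, theta in enumerate(np.deg2rad(angles)):
        a = np.exp(2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
        spectrum[k] = 1.0 / np.real(a.conj() @ R_inv @ a)      # Capon power estimate
    return angles, spectrum

# Simulated example: two plane waves at -20 and 30 degrees plus noise.
rng = np.random.default_rng(0)
M, N = 12, 400
doas = np.deg2rad([-20, 30])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
angles, P = capon_spectrum(X, M)
```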

  1. A procedure for empirical initialization of adaptive testing algorithms

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability

  2. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; hide

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  3. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab

  4. Control algorithms and applications of the wavefront sensorless adaptive optics

    Science.gov (United States)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for the WFSless AO system are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a certain control algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. Based on a brief description of the above typical control algorithms, hybrid methods combining model-free with model-based control algorithms are generalized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.

  5. On flexible CAD of adaptive control and identification algorithms

    DEFF Research Database (Denmark)

    Christensen, Anders; Ravn, Ole

    1988-01-01

    SLLAB is a MATLAB-family software package for solving control and identification problems. This paper concerns the planning of a general-purpose subroutine structure for solving identification and adaptive control problems. A general-purpose identification algorithm is suggested, which allows a total redesign of the system within each sample. The necessary design parameters are evaluated and a decision vector is defined, from which the identification algorithm can be generated by the program. Using the decision vector, a decision-node tree structure is built up, where the nodes define...

  6. Adaptive transmit selection with interference suppression

    KAUST Repository

    Radaydeh, Redha Mahmoud Mesleh

    2010-01-01

    This paper studies the performance of adaptive transmit channel selection in multipath fading channels. The adaptive selection algorithms are configured for single-antenna bandwidth-efficient or power-efficient transmission with as few transmit channel estimations as possible. Because the number of active co-channel interfering signals and their corresponding powers behave randomly, the adaptation to channel conditions, assuming uniform buffer and traffic loading, is proposed to be jointly based on the transmit channels' instantaneous signal-to-noise ratios (SNRs) and signal-to-interference-plus-noise ratios (SINRs). Two interference cancelation algorithms, the dominant cancelation and the less complex arbitrary cancelation, are considered, for which the receive antenna array is assumed to have small angular spread. Analytical formulations for some performance measures are presented, along with several processing-complexity and numerical comparisons between the various adaptation schemes. ©2010 IEEE.

  7. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    Case adaptation is the core of the case-based reasoning (CBR) approach, which can modify past solutions to solve new problems. It generally relies on the knowledge base and heuristics in order to achieve the required changes, and it has always been a difficult process for designers within the CBR cycle. Its difficulties can be attributed to the large effort and computational analysis needed for acquiring the domain knowledge. To address these problems, this research explores a new method that applies a genetic algorithm (GA) to CBR adaptation. This approach can decrease the computational complexity of determining the required changes for problems, especially those involving a great amount of domain knowledge. Besides, it can decrease the required time by dividing the design task into subtasks that can be solved at the same time. Therefore, the proposed system can be practically applied to solving complex problems and can be used to perform a variety of design tasks on a broad set of application domains. It has been implemented for tablet formulation as the application domain, and the proposed system has improved the accuracy performance of CBR design systems.

  8. A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Hui Li

    2015-01-01

    Full Text Available In traditional adaptive-weight stereo matching, the rectangular support region requires excessive memory consumption and time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. This algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on a linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate the spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of the color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measurement function. The initial disparity map is then obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency checking method and a segmentation voting method. The parameter values involved in this algorithm are determined through many simulation experiments to further improve the matching effect. Simulation results indicate that the new matching method performs well on standard stereo benchmarks, and the running time of our algorithm is remarkably lower than that of an algorithm with a rectangular support region.

  9. Transmission History Based Distributed Adaptive Contention Window Adjustment Algorithm Cooperating with Automatic Rate Fallback for Wireless LANs

    Science.gov (United States)

    Ogawa, Masakatsu; Hiraguri, Takefumi; Nishimori, Kentaro; Takaya, Kazuhiro; Murakawa, Kazuo

    This paper proposes and investigates a distributed adaptive contention window adjustment algorithm for wireless LANs based on the transmission history, called the transmission-history-based distributed adaptive contention window adjustment (THAW) algorithm. The objective is to reduce the transmission delay and improve the channel throughput compared to conventional algorithms. The feature of THAW is that it adaptively adjusts the initial contention window (CWinit) size in the binary exponential backoff (BEB) algorithm used in the IEEE 802.11 standard according to the transmission history and the automatic rate fallback (ARF) algorithm, which is the most basic automatic rate control. The effect is to keep CWinit at a high value in a congested state. Simulation results show that the THAW algorithm outperforms the conventional algorithms in terms of channel throughput and delay, even if the timer in the ARF is changed.

  10. Polarization Smoothing Generalized MUSIC Algorithm with Polarization Sensitive Array for Low Angle Estimation.

    Science.gov (United States)

    Tan, Jun; Nie, Zaiping

    2018-05-12

    Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to multipath coherent interference from the ground-reflection image of the target, especially for very high frequency (VHF) radars, whose antennas are severely restricted in terms of aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization sensitive arrays (PSAs), is proposed in this paper to solve this problem. Firstly, polarization smoothing pre-processing is exploited to eliminate the coherence between the direct and specular signals. Secondly, the generalized MUSIC algorithm is constructed for low angle estimation. Finally, based on the geometric information of the symmetric multipath model, the proposed algorithm converts the two-dimensional search into a one-dimensional search, thus reducing the computational burden. Numerical results are provided to verify the effectiveness of the proposed method, showing that the proposed algorithm significantly improves angle estimation performance in the low-angle region compared with the available methods, especially when the grazing angle is near zero.
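
    For orientation, the sketch below shows the standard narrowband MUSIC pseudospectrum for a scalar-sensor uniform linear array; it does not include the polarization smoothing or the generalized formulation of the paper, which are precisely what is needed when the direct and specular paths are coherent.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 721)):
    """Standard narrowband MUSIC pseudospectrum for a scalar-sensor ULA.

    X: (n_sensors, n_snapshots) array snapshots; n_sources: assumed source count.
    (The paper's polarization-smoothing generalized MUSIC for polarization
    sensitive arrays is not reproduced here.)
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvec[:, :M - n_sources]                   # noise subspace
    P = np.empty(len(angles))
    for k, theta in enumerate(np.deg2rad(angles)):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        P[k] = 1.0 / np.real(np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, P
```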

  11. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    Science.gov (United States)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising the spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving the communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, which is much smaller than many classic adaptive modulation algorithms, such as the Hughes-Hartogs algorithm and the Chow algorithm, and is in line with the development direction of green and high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than that without adaptive modulation, with the BER of the former one to two orders of magnitude lower than that of the latter as the SNR increases. This low-complexity adaptive modulation algorithm is therefore extremely useful for the OFDM-ROF system.
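
    The group-wise bit-loading idea can be sketched as a simple threshold rule: compute the average SNR of each sub-carrier group and map it to a modulation order. The thresholds and modulation set below are illustrative assumptions, not the ones derived in the paper.

```python
import numpy as np

def group_adaptive_modulation(snr_db, group_size, thresholds=(6.0, 12.0, 18.0, 24.0)):
    """Assign one modulation format per sub-carrier group from its mean SNR.

    snr_db: per-subcarrier SNR estimates in dB
    group_size: M, the number of subcarriers handled as one unit
    thresholds: illustrative switching points for BPSK/QPSK/16QAM/64QAM/256QAM
    Returns bits-per-symbol for every subcarrier (same value within a group).
    """
    bits_per_symbol = (1, 2, 4, 6, 8)            # BPSK ... 256QAM
    snr_db = np.asarray(snr_db, float)
    n_groups = int(np.ceil(len(snr_db) / group_size))
    allocation = np.empty(len(snr_db), dtype=int)
    for g in range(n_groups):
        sl = slice(g * group_size, (g + 1) * group_size)
        avg = snr_db[sl].mean()                  # combined SNR of the group
        order = int(np.searchsorted(thresholds, avg))
        allocation[sl] = bits_per_symbol[order]
    return allocation

# 256 subcarriers in groups of 16, SNR shaped by frequency-selective fading
rng = np.random.default_rng(0)
snr = 15 + 8 * np.sin(np.linspace(0, 3 * np.pi, 256)) + rng.normal(0, 1, 256)
bits = group_adaptive_modulation(snr, group_size=16)
```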

  12. Automatically tuned adaptive differencing algorithm for 3-D SN implemented in PENTRAN

    International Nuclear Information System (INIS)

    Sjoden, G.; Courau, T.; Manalo, K.; Yi, C.

    2009-01-01

    We present an adaptive algorithm with an automated tuning feature to augment optimum differencing scheme selection for 3-D SN computations in Cartesian geometry. This adaptive differencing scheme has been implemented in the PENTRAN parallel SN code. Individual fixed schemes based on the zeroth spatial transport moment, including the Diamond Zero (DZ), Directional Theta Weighted (DTW), and Exponential Directional Iterative (EDI) 3-D SN methods, were evaluated and compared with solutions generated using a code-tuned adaptive algorithm. Model problems considered include a fixed source slab problem (using reflected y- and z-axes) containing mixed shielding and diffusive regions, and a 17 x 17 PWR assembly eigenvalue test problem; these problems were benchmarked against multigroup MCNP5 Monte Carlo computations. Both problems were effective in highlighting the performance of the adaptive scheme compared to single schemes, and demonstrated that the adaptive tuning handles exceptions to the standard DZ-DTW-EDI adaptive strategy. The tuning feature includes special scheme selection provisions for optically thin cells, and incorporates the ratio of the angular source density relative to the total angular collision density to best select the differencing method. Overall, the adaptive scheme demonstrated the best overall solution accuracy in the test problems. (authors)

  13. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    Science.gov (United States)

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  14. Properties of the center of gravity as an algorithm for position measurements: Two-dimensional geometry

    CERN Document Server

    Landi, Gregorio

    2003-01-01

    The center of gravity as an algorithm for position measurements is analyzed for a two-dimensional geometry. Several mathematical consequences of discretization for various types of detector arrays are extracted. Arrays with rectangular, hexagonal, and triangular detectors are analytically studied, and tools are given to simulate their discretization properties. Special signal distributions free of discretization error are isolated. It is proved that some crosstalk spreads are able to eliminate the center-of-gravity discretization error for any signal distribution. Simulations, adapted to the CMS electromagnetic calorimeter and to a triangular detector array, are provided for energy and position reconstruction algorithms with a finite number of detectors.
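
    A one-dimensional toy version of the estimator shows where the discretization enters: the position is the signal-weighted mean of the cell centres, and the systematic deviation of this estimate from the true impact point is the discretization error analyzed in the paper. The pitch and signal values below are illustrative.

```python
import numpy as np

def center_of_gravity(signals, pitch=1.0):
    """Center-of-gravity position estimate from a cluster of detector cells.

    signals: charges/energies collected by consecutive cells of equal pitch.
    Returns the estimated hit position relative to the cluster centre.
    """
    signals = np.asarray(signals, float)
    positions = (np.arange(len(signals)) - (len(signals) - 1) / 2) * pitch
    return (signals * positions).sum() / signals.sum()

# A shower sampled by 3 cells; the estimate is systematically pulled toward 0.
print(center_of_gravity([0.15, 0.70, 0.25]))
```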

  15. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

    Science.gov (United States)

    Bai, Chen; Han, Dongjuan

    2018-04-01

    MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but its steering vector is more complicated to calculate than that of a rectangular array. In this paper, a quaternion-based algorithm is used to reduce the dimension of the steering vector and simplify the calculation.

  16. An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Aries Pratiarso

    2015-06-01

    Full Text Available In wireless sensor network applications, the positions of nodes are randomly distributed following the contour of the observation area. A simple solution that requires no measurement tools is provided by range-free methods; however, such methods yield only coarse estimates of node positions. In this paper, we propose the Adaptive Connectivity-based Centroid (ACC) algorithm, a combination of the Centroid range-free algorithm and a hop-based connectivity algorithm. Nodes can estimate their own positions based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, each of which has a certain weight depending on the received signal strength, and the weighted values are used to obtain the estimated positions of the nodes. Simulation results show that the proposed algorithm has a position estimation error of up to 3 meters on a 100 x 100 square meter observation area, with up to 3 hop counts for an 80-meter communication range. The proposed algorithm achieves an average positioning error up to 10 meters lower than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
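
    A minimal weighted-centroid sketch, with RSS-derived weights standing in for the connectivity-region weights of the ACC algorithm, is given below; the anchor layout and RSS values are illustrative.

```python
import numpy as np

def weighted_centroid(anchors, rss_dbm, in_range):
    """Weighted centroid position estimate for one unknown node.

    anchors: (K, 2) known anchor coordinates
    rss_dbm: received signal strength from each anchor (stronger -> closer)
    in_range: boolean mask of anchors actually heard by the node
    The weighting below (linear rescaling of RSS) is illustrative; the ACC
    algorithm derives its weights from connectivity regions instead.
    """
    anchors = np.asarray(anchors, float)[in_range]
    rss = np.asarray(rss_dbm, float)[in_range]
    # Map RSS to positive weights: the strongest anchor gets the largest weight.
    w = rss - rss.min() + 1.0
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
rss = [-60.0, -75.0, -72.0, -88.0]         # node is nearest the (0, 0) anchor
est = weighted_centroid(anchors, rss, in_range=[True, True, True, False])
```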

  17. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing the results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is proved to be more efficient due to its use of a small number of sample points, according to the comparison results.

  18. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of improperly functioning and failed array elements, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering

  19. Adaptive Equalizer Using Selective Partial Update Algorithm and Selective Regressor Affine Projection Algorithm over Shallow Water Acoustic Channels

    Directory of Open Access Journals (Sweden)

    Masoumeh Soflaei

    2014-01-01

    Full Text Available One of the most important problems for reliable communication in shallow water channels is intersymbol interference (ISI), which is due to scattering from the surface and reflection from the bottom. Using adaptive equalizers in the receiver is one of the best suggested ways of overcoming this problem. In this paper, we apply the family of selective regressor affine projection algorithms (SR-APA) and the family of selective partial update APA (SPU-APA), which have low computational complexity, one of the important factors influencing adaptive equalizer performance. We apply experimental data from the Strait of Hormuz to examine the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) values of SR-APA and SPU-APA decrease by 5.8 dB and 5.5 dB, respectively, in comparison with the least mean square (LMS) algorithm. The families of SPU-APA and SR-APA also have better convergence speed than the LMS-type algorithm.

  20. An Adaptive Bacterial Foraging Optimization Algorithm with Lifecycle and Social Learning

    Directory of Open Access Journals (Sweden)

    Xiaohui Yan

    2012-01-01

    Full Text Available The Bacterial Foraging Optimization (BFO) algorithm is a recently proposed swarm intelligence algorithm inspired by the foraging and chemotactic phenomena of bacteria. However, its optimization ability is not as good as that of other classic algorithms, as it has several shortcomings. This paper presents an improved BFO algorithm. In the new algorithm, a lifecycle model of bacteria is established: the bacteria can split, die, or migrate dynamically during the foraging process, and the population size varies as the algorithm runs. Social learning is also introduced so that the bacteria tumble towards better directions in the chemotactic steps. Besides, adaptive step lengths are employed in chemotaxis. The new algorithm is named BFOLS and is tested on a set of benchmark functions with dimensions of 2 and 20. Canonical BFO, PSO, and GA algorithms are employed for comparison. Experimental results and statistical analysis show that the BFOLS algorithm offers significant improvements over the original BFO algorithm. Particularly with a dimension of 20, it has the best performance among the four algorithms.

  1. Adaptive modification of the delayed feedback control algorithm with a continuously varying time delay

    International Nuclear Information System (INIS)

    Pyragas, V.; Pyragas, K.

    2011-01-01

    We propose a simple adaptive delayed feedback control algorithm for stabilization of unstable periodic orbits with unknown periods. The state-dependent time delay is varied continuously towards the period of the controlled orbit according to a gradient-descent method realized through three simple ordinary differential equations. We demonstrate the efficiency of the algorithm with the Roessler and Mackey-Glass chaotic systems. The stability of the controlled orbits is proven by computation of the Lyapunov exponents of the linearized equations. -- Highlights: → A simple adaptive modification of the delayed feedback control algorithm is proposed. → It enables the control of unstable periodic orbits with unknown periods. → The delay time is varied continuously according to a gradient-descent method. → The algorithm is embodied by three simple ordinary differential equations. → The validity of the algorithm is proven by computation of the Lyapunov exponents.

  2. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    Directory of Open Access Journals (Sweden)

    Zhihua Zhang

    2016-01-01

    Full Text Available Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  3. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems

    Directory of Open Access Journals (Sweden)

    Fuqing Zhao

    2016-01-01

    Full Text Available A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators remain static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to become trapped in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolutionary operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, and differential evolution operators during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain model. Simulation results on standard benchmark functions reveal that the proposed algorithm outperforms other classical multiobjective evolutionary algorithms.

  4. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    Science.gov (United States)

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundancy array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.

  5. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    Science.gov (United States)

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situations and ambient noise types with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. For the adaptive multiband spectral subtraction (MBSS) algorithm, the average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  6. Algorithms for selecting informative marker panels for population assignment.

    Science.gov (United States)

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
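
    To make the contrast between univariate accumulation and the multivariate procedures concrete, here is a minimal greedy forward-selection sketch; `accuracy(panel)` is a hypothetical callback standing in for the simulated assignment-accuracy evaluation, and nothing here reproduces the paper's code.

```python
def greedy_panel(markers, panel_size, accuracy):
    """Greedily add the marker that most improves panel assignment accuracy (sketch)."""
    panel, remaining = [], set(markers)
    while remaining and len(panel) < panel_size:
        # evaluate each remaining marker jointly with the markers already chosen
        best = max(remaining, key=lambda m: accuracy(panel + [m]))
        panel.append(best)
        remaining.remove(best)
    return panel

# usage (hypothetical names): panel = greedy_panel(all_loci, 10, simulated_assignment_accuracy)
```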

  7. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.

  8. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
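
    The core of any MCC-based adaptive filter is an LMS-like update in which the error is weighted by a Gaussian kernel, so impulsive samples contribute almost nothing. The sketch below shows that update with a crude variable step; the paper's actual MSD-minimising step-size recursion is not reproduced here, and all constants are assumptions.

```python
import numpy as np

def mcc_variable_step_filter(x, d, taps=8, mu0=0.05, kernel_sigma=1.0):
    """MCC adaptive filter with a heuristic variable step size (illustrative sketch)."""
    w = np.zeros(taps)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    p = 0.0                                     # smoothed error measure steering the step
    for k in range(taps, len(x)):
        u = x[k - taps:k][::-1]                 # most recent input vector
        y[k] = w @ u
        e[k] = d[k] - y[k]
        # Gaussian kernel of the correntropy criterion: impulsive errors get tiny weight
        g = np.exp(-e[k] ** 2 / (2.0 * kernel_sigma ** 2))
        p = 0.97 * p + 0.03 * g * e[k] ** 2     # robust, smoothed error power
        mu = np.clip(mu0 * p, 1e-4, 0.2)        # larger step far from convergence
        w += mu * g * e[k] * u
    return w, y, e
```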

  9. Adaptive linear rank tests for eQTL studies.

    Science.gov (United States)

    Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas

    2013-02-10

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.

  10. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS Dose Calculation Algorithm for VMAT and Sliding Window Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Park, J; Lu, B; Yan, G; Park, J; Li, F; Li, J; Liu, C [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To identify the weakness of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc had the same segments and the original monitor units. Arc angles were less than ± 30°. Multiple arcs were delivered through consecutive, repetitive gantry operation clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To identify dose errors caused in the delivery of VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using static and step-and-shoot delivery techniques. The dose distributions of the SW technique were evaluated by creating split fields with fine moving steps of the multi-leaf collimator leaves. Doses calculated using the adaptive convolution algorithm were analyzed against measured doses with distance-to-agreement and dose-difference criteria of 3 mm and 3%. Results: While beam delivery through static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial-arc delivery of the VMAT fields yielded a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The doses calculated for the SW technique showed a dose difference over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished using the suggested QA method. This partial-arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.

  11. System Realization of Broad Band Digital Beam Forming for Digital Array Radar

    Directory of Open Access Journals (Sweden)

    Wang Feng

    2013-09-01

    Full Text Available Broadband Digital Beam Forming (DBF) is the key technique for the realization of Digital Array Radar (DAR). We propose a method that realizes the channel equalization and DBF time-delay filtering functions jointly, using the adaptive Sample Matrix Inversion (SMI) algorithm. The broadband DBF function is realized on a new DBF module based on parallel fiber-optic engines and a Field Programmable Gate Array (FPGA). Good performance is achieved when the module is used in radar products.
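
    The record relies on the adaptive Sample Matrix Inversion (SMI) algorithm; a generic, hedged sketch of the SMI weight computation (with diagonal loading, which practical systems usually add) is shown below. It illustrates only the textbook method, not the paper's FPGA pipeline, and the array setup in the usage lines is assumed.

```python
import numpy as np

def smi_weights(snapshots, steering, diag_load=1e-2):
    """Sample Matrix Inversion beamformer: w proportional to R^{-1} a(theta0), MVDR-normalised.
    snapshots: (n_elements, n_snapshots) complex array; steering: length n_elements."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]        # sample covariance matrix
    R = R + diag_load * (np.trace(R).real / n) * np.eye(n)         # diagonal loading for robustness
    w = np.linalg.solve(R, steering)
    return w / (steering.conj() @ w)

# usage with an assumed 8-element half-wavelength ULA steered to broadside
rng = np.random.default_rng(0)
n_el, n_snap = 8, 512
a0 = np.exp(1j * np.pi * np.arange(n_el) * np.sin(0.0))            # broadside steering vector
noise = rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap))
w = smi_weights(noise, a0)
```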

  12. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    Science.gov (United States)

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
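
    As a rough illustration of what an "adaptive constraint term" on class sizes can look like, the sketch below adds a size-dependent penalty to the assignment step of plain K-means. The penalty form and its weight `lam` are assumptions and do not reproduce the published objective function.

```python
import numpy as np

def size_penalised_kmeans(X, k, n_iter=50, lam=0.5, seed=0):
    """K-means with a class-size penalty in the assignment step (sketch only)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = np.argmin(d, axis=1)
    for _ in range(n_iter):
        counts = np.bincount(labels, minlength=k).astype(float)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        # crowded classes get an extra cost, discouraging very unequal class sizes
        penalty = lam * (counts / len(X)) * d.mean()
        labels = np.argmin(d + penalty, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```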

  13. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetration of neutrons through tube walls made of metallic materials. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shape of objects. In addition, a CT experiment with two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  14. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kairong Duan

    2018-05-01

    Full Text Available Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is a process of mapping cloud tasks to Virtual Machines (VMs). When binding the tasks to VMs, the scheduling strategy has an important influence on datacenter efficiency and the related energy consumption. Although many traditional scheduling algorithms have been applied on various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem, which aims to minimize makespan, with a Genetic Algorithm (GA). We propose an incremental GA with adaptive probabilities of crossover and mutation. The mutation and crossover rates change over the generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in a Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacities on CloudSim. We compared the performance of the adaptive incremental GA with that of the standard GA, Min-Min, Max-Min, Simulated Annealing and the Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions with acceptable makespan in less computation time.

  15. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    Science.gov (United States)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
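
    For a single broadband source received on several feed channels, a standard way to estimate maximum-SNR combining weights is to take the principal eigenvector of the sample cross-correlation matrix of the channels. The sketch below illustrates that generic idea only; it is not the DSN system's algorithm, and the data layout is an assumption.

```python
import numpy as np

def combining_weights(channels):
    """Principal-eigenvector estimate of combining weights (generic sketch).
    channels: (n_channels, n_samples) array of co-sampled feed outputs."""
    X = np.asarray(channels)
    R = X @ X.conj().T / X.shape[1]            # sample cross-correlation matrix
    _, vecs = np.linalg.eigh(R)                # eigenvectors in ascending eigenvalue order
    w = vecs[:, -1]                            # dominant eigenvector ~ optimum weights
    return w / np.linalg.norm(w)
```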

  16. A Demosaicking Algorithm with Adaptive Inter-Channel Correlation

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2015-12-01

    Full Text Available Most common cameras use a CCD sensor device measuring a single color per pixel. Demosaicking is the interpolation process by which one can infer a full color image from such a matrix of values, thus interpolating the two missing components per pixel. Most demosaicking methods take advantage of inter-channel correlation, locally selecting the best interpolation direction. The obtained results look convincing except when local geometry cannot be inferred from neighboring pixels or channel correlation is low. In these cases, these algorithms create interpolation artifacts such as zipper effect or color aliasing. This paper discusses the implementation details of the algorithm proposed in [J. Duran, A. Buades, "Self-Similarity and Spectral Correlation Adaptive Algorithm for Color Demosaicking", IEEE Transactions on Image Processing, 23(9), pp. 4031-4040, 2014]. The proposed method involves nonlocal image self-similarity in order to reduce interpolation artifacts when local geometry is ambiguous. It further introduces a clear and intuitive manner of balancing how much channel correlation must be taken advantage of.

  17. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-15

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Both a linear function and a logarithmic function are adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportion that depends on a common value of the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet's hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • Fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.
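
    A minimal sketch of the fitness idea described above (summing distances between equidistant key points of the normalised measured and simulated loops) is given below; the normalisation, the key-point sampling by index along the loop, and the point count are assumptions, not the paper's exact definition.

```python
import numpy as np

def loop_fitness(h_meas, b_meas, h_sim, b_sim, n_key=40):
    """Sum of distances between equidistant key points of two normalised loops (sketch)."""
    def key_points(h, b):
        h = np.asarray(h, dtype=float); b = np.asarray(b, dtype=float)
        h = (h - h.min()) / (h.max() - h.min())          # normalise both axes to [0, 1]
        b = (b - b.min()) / (b.max() - b.min())
        t = np.linspace(0.0, 1.0, len(h))                # parameterise along the loop
        ti = np.linspace(0.0, 1.0, n_key)                # equidistant key points
        return np.interp(ti, t, h), np.interp(ti, t, b)
    hm, bm = key_points(h_meas, b_meas)
    hs, bs = key_points(h_sim, b_sim)
    return float(np.sum(np.hypot(hm - hs, bm - bs)))     # smaller is better (fitter)
```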

  18. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    International Nuclear Information System (INIS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Both a linear function and a logarithmic function are adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportion that depends on a common value of the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet's hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • Fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.

  19. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Robert K. Henderson

    2012-05-01

    Full Text Available We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two-photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast.

  20. AN ALGORITHM OF ADAPTIVE TORQUE CONTROL IN INJECTOR INTERNAL COMBUSTION ENGINE

    Directory of Open Access Journals (Sweden)

    D. N. Gerasimov

    2015-07-01

    Full Text Available Subject of Research. An internal combustion engine, considered as a plant, is a highly nonlinear complex system that works mostly in dynamic regimes in the presence of noise and disturbances. A number of engine characteristics and parameters are not known, or are known only approximately, due to the complex structure and multimode operation of the engine. In this regard the problem of torque control is not trivial and motivates the use of modern techniques of control theory that make it possible to overcome the mentioned problems. As a consequence, a relatively simple algorithm of adaptive torque control of an injector engine is proposed in the paper. Method. The proposed method is based on a nonlinear dynamic model with parametric and functional uncertainties (static characteristics), which are suppressed by means of an adaptive control algorithm with a single adjustable parameter. The algorithm is a proportional control law with an adjustable feedback gain and provides exponential convergence of the control error to a neighborhood of the zero equilibrium. It is shown that the radius of the neighborhood can be arbitrarily reduced by changing the controller design parameters. Main Results. A dynamic nonlinear model of the engine has been designed for the purpose of control synthesis and simulation of the closed-loop system. The parameters and static functions of the model are identified with the use of data acquired during the Federal Test Procedure (USA) of a Chevrolet Tahoe vehicle with an eight-cylinder 5.7 L engine. The algorithm of adaptive torque control is designed, and the properties of the closed-loop system are analyzed with the use of the Lyapunov function approach. The closed-loop system operation is verified by means of simulation in the MATLAB/Simulink environment. Simulation results show that the controller provides the boundedness of all signals and convergence of the control error to a neighborhood of the zero equilibrium despite significant variations of engine speed. The

  1. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.
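
    To make the "spatially adaptive threshold depending on the coding error of each block" idea concrete, here is a toy block-DCT sketch in which a block's threshold is tightened until its reconstruction error falls below a target. Block size, thresholds, and the distortion target are illustrative assumptions, not the coder of the record above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def adaptive_block_dct(image, block=8, thresh0=20.0, mse_target=4.0):
    """Toy block-DCT coder with a per-block adaptive threshold (sketch only)."""
    img = np.asarray(image, dtype=float)
    out = np.zeros_like(img)
    h, w = img.shape
    for r in range(0, h - h % block, block):          # partial border blocks are skipped
        for c in range(0, w - w % block, block):
            tile = img[r:r + block, c:c + block]
            coef = dctn(tile, norm='ortho')
            thresh, rec = thresh0, tile
            for _ in range(5):                        # tighten threshold while block error is too big
                kept = np.where(np.abs(coef) >= thresh, coef, 0.0)
                rec = idctn(kept, norm='ortho')
                if np.mean((rec - tile) ** 2) <= mse_target:
                    break
                thresh *= 0.5
            out[r:r + block, c:c + block] = rec
    return out
```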

  2. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Full Text Available Practical methods for improving existing wireless transmission systems are proposed. The mathematical apparatus for using an adaptive array antenna in LTE with MIMO for space-time processing is described and validated by models, whose graphs are shown. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput.

  3. Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation

    Science.gov (United States)

    Nagarajan, Anand; Soghrati, Soheil

    2018-03-01

    A new non-iterative mesh generation algorithm named conforming to interface structured adaptive mesh refinement (CISAMR) is introduced for creating 3D finite element models of problems with complex geometries. CISAMR transforms a structured mesh composed of tetrahedral elements into a conforming mesh with low element aspect ratios. The construction of the mesh begins with the structured adaptive mesh refinement of elements in the vicinity of material interfaces. An r-adaptivity algorithm is then employed to relocate selected nodes of nonconforming elements, followed by face-swapping a small fraction of them to eliminate tetrahedrons with high aspect ratios. The final conforming mesh is constructed by sub-tetrahedralizing remaining nonconforming elements, as well as tetrahedrons with hanging nodes. In addition to studying the convergence and analyzing element-wise errors in meshes generated using CISAMR, several example problems are presented to show the ability of this method for modeling 3D problems with intricate morphologies.

  4. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    Science.gov (United States)

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  5. FPGA-Based Communications Receivers for Smart Antenna Array Embedded Systems

    Directory of Open Access Journals (Sweden)

    Millar James

    2006-01-01

    Full Text Available Field-programmable gate arrays (FPGAs) are drawing ever increasing interest from designers of embedded wireless communications systems. They outpace digital signal processors (DSPs) through hardware execution of a wide range of parallelizable communications transceiver algorithms, at a fraction of the design and implementation effort and cost required for application-specific integrated circuits (ASICs). In our study, we employ an Altera Stratix FPGA development board, along with the DSP Builder software tool which acts as a high-level interface to the powerful Quartus II environment. We compare single- and multibranch FPGA-based receiver designs in terms of error rate performance and power consumption. We exploit FPGA operational flexibility and algorithm parallelism to design eigenmode-monitoring receivers that can adapt to variations in wireless channel statistics, for high-performing, inexpensive, smart antenna array embedded systems.

  6. FPGA-Based Communications Receivers for Smart Antenna Array Embedded Systems

    Directory of Open Access Journals (Sweden)

    James Millar

    2006-10-01

    Full Text Available Field-programmable gate arrays (FPGAs) are drawing ever increasing interest from designers of embedded wireless communications systems. They outpace digital signal processors (DSPs) through hardware execution of a wide range of parallelizable communications transceiver algorithms, at a fraction of the design and implementation effort and cost required for application-specific integrated circuits (ASICs). In our study, we employ an Altera Stratix FPGA development board, along with the DSP Builder software tool which acts as a high-level interface to the powerful Quartus II environment. We compare single- and multibranch FPGA-based receiver designs in terms of error rate performance and power consumption. We exploit FPGA operational flexibility and algorithm parallelism to design eigenmode-monitoring receivers that can adapt to variations in wireless channel statistics, for high-performing, inexpensive, smart antenna array embedded systems.

  7. Adaptive algorithm for predicting increases in central loads of electrical energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Arbachyauskene, N A; Pushinaytis, K V

    1982-01-01

    An adaptive algorithm for predicting increases in the central loads of an electrical energy system is suggested for the state estimation task. The algorithm is based on the Kalman filter. The a priori assigned noise characteristics, which are known only with low accuracy, are used to calculate the gain coefficient only at the beginning of the calculation. Afterwards, the gain coefficient is calculated from the innovation sequence. This approach makes it possible to correct errors in the assignment of the statistical noise characteristics and to follow their changes. The algorithm is verified experimentally.
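
    The record describes a Kalman filter whose gain is initially computed from rough a priori noise statistics and later re-derived from the innovation sequence. A scalar, hedged sketch of that innovation-based adaptation is given below; the random-walk load model and all constants are assumptions, not the load-forecasting model of the paper.

```python
import numpy as np

def innovation_adaptive_kalman(z, q=1e-3, r0=1.0, window=20):
    """Scalar Kalman filter with innovation-based measurement-noise estimation (sketch)."""
    z = np.asarray(z, dtype=float)
    x, p, r = z[0], 1.0, r0                  # state, state variance, measurement variance
    innovations, estimates = [], []
    for zk in z:
        p += q                               # time update for a random-walk load model
        nu = zk - x                          # innovation
        innovations.append(nu)
        if len(innovations) >= window:
            # E[nu^2] = P + R, so the recent innovation variance gives an estimate of R
            r = max(np.var(innovations[-window:]) - p, 1e-6)
        k = p / (p + r)                      # adaptive Kalman gain
        x += k * nu
        p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates)
```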

  8. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform-domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. In the TVGLMS, the robust variable step size is achieved by using Griffiths' gradient, which uses the cross-correlation between the input and the desired signal contaminated with observation or random noise. The algorithm is discrete cosine transform (DCT) based and uses the symmetry property of the signal to represent the signal in the frequency domain with a smaller number of frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented as an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform-domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired-signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm at retaining the geometrical characteristics of the ECG signal.
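
    The distinguishing feature of a Griffiths-type LMS update is that the gradient uses a running estimate of the cross-correlation between the (noisy) desired signal and the input vector rather than the instantaneous error alone. The time-domain sketch below shows only that core step; the DCT transform-domain processing and the paper's variable-step rule are omitted, and the smoothing constant is an assumption.

```python
import numpy as np

def griffiths_lms(x, d, taps=16, mu=0.01, rho=0.99):
    """Time-domain Griffiths' LMS update (illustrative core step only)."""
    w = np.zeros(taps)
    p = np.zeros(taps)                          # running estimate of E{d(k) u(k)}
    y = np.zeros(len(x))
    for k in range(taps, len(x)):
        u = x[k - taps:k][::-1]                 # current input (reference) vector
        y[k] = w @ u
        p = rho * p + (1.0 - rho) * d[k] * u    # cross-correlation estimate
        w += mu * (p - u * y[k])                # Griffiths' gradient step
    return w, y
```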

  9. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun

    2012-10-16

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise-shaping effect of the Delta-Sigma modulation at the low end, and the distortion noise induced at the high end, of the photodiode current range are analyzed in detail. The proposed architecture can extend the DR by about 20N log 2 dB at the high end of the photodiode current range with an N-bit Up-Down counter. At the low end, it can compensate for the larger readout noise by employing Extended Counting. The Adaptive Delta-Sigma architecture employing a 4-bit Up-Down counter achieved about 160 dB of DR, with a Peak SNR (PSNR) of 80 dB at the high end. Compared to the other HDR architectures, the Adaptive Delta-Sigma based architecture provides the widest DR with the best SNR performance in the extended range.

  10. An Adaptive Large Neighborhood Search Algorithm for the Multi-mode RCPSP

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt

    We present an Adaptive Large Neighborhood Search algorithm for the Multi-mode Resource-Constrained Project Scheduling Problem (MRCPSP). We incorporate techniques for deriving additional precedence relations and propose a new method, so-called mode-diminution, for removing modes during execution...

  11. Secondary Education Students' Difficulties in Algorithmic Problems with Arrays: An Analysis Using the SOLO Taxonomy

    Science.gov (United States)

    Vrachnos, Euripides; Jimoyiannis, Athanassios

    2017-01-01

    Developing students' algorithmic and computational thinking is currently a major objective for primary and secondary education in many countries around the globe. The literature suggests that students face various difficulties in programming processes because of their mental models of basic programming constructs. Arrays constitute the first…

  12. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation stud

  13. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  14. An Adaptive Cultural Algorithm with Improved Quantum-behaved Particle Swarm Optimization for Sonar Image Detection.

    Science.gov (United States)

    Wang, Xingmei; Hao, Wenqian; Li, Qiming

    2017-12-18

    This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration number and the fitness values of particles are regarded as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization algorithm (QPSO). The improved quantum-behaved particle swarm optimization algorithm (IQPSO) can make particles adjust their behaviours according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the accept function and influence function are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey distribution information of underwater sonar images, and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results on benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.

  15. A parallel adaptive finite difference algorithm for petroleum reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Hai Minh

    2005-07-01

    Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have been proven useful for overcoming limitations of computational resources and improving the resolution of the numerical solutions to a wide range of problems. Local refinement of the computational mesh where it is needed to improve the accuracy of solutions yields better solution resolution and represents a more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three parts. First we explain the methodology and intricacies of the adaptive finite difference scheme. Then we extend the cell-centered finite difference discretization to a multilevel hierarchy of refined grids, and finally we employ the algorithm on a parallel computer. The results in this work show that the presented approach is robust and stable, demonstrating the increased solution accuracy due to local refinement and reduced computing resource consumption. (Author)

  16. An Adaptive Genetic Algorithm with Dynamic Population Size for Optimizing Join Queries

    OpenAIRE

    Vellev, Stoyan

    2008-01-01

    The problem of finding the optimal join ordering executing a query to a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive solution search unacceptable for queries with a great number of joined relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-determinis...

  17. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.

  18. A Novel Adaptive Particle Swarm Optimization Algorithm with Foraging Behavior in Optimization Design

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2018-01-01

    Full Text Available The method of repeated trial and proofreading generally used in conventional reducer design is inefficient, and the resulting reducer is often large. To address these problems, this paper presents an adaptive particle swarm optimization algorithm with foraging behavior. In this method, the bacterial foraging process is introduced into the adaptive particle swarm optimization algorithm, providing particle chemotaxis, swarming, reproduction, elimination and dispersal, to improve the ability of local search and avoid premature convergence. Verification on typical test functions and application to the optimization design of a reducer structure with discrete and continuous variables show that the new algorithm has good reliability, strong searching ability and high accuracy. It can be used in engineering design, and has strong applicability.

  19. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performances similar to those of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation allows problems with rough coefficients to be tackled efficiently as well, significantly improving the performance of a standard Monte Carlo scheme.

  20. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Science.gov (United States)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  1. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Directory of Open Access Journals (Sweden)

    Yang Jian

    2007-01-01

    Full Text Available We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  2. Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data

    Science.gov (United States)

    Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick

    2017-01-01

    Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, where the positive samples make up only a minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former method are not scalable to larger data scales. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets, where the former method is no longer workable. We also find the latter approach more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible performance of the classifier and shorten the run time compared to the brute-force method. PMID:28753613

  3. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    Science.gov (United States)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with adaptive sleep cycle is presented. This DBA algorithm has the ability to allocate an additional bandwidth to the end user within a single sleep cycle whose duration changes depending on the current buffers occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle depending on the network load in order to provide service to the end user without violating strict QoS requests in all network operating conditions.

  4. Adaptive port-starboard beamforming of triplet arrays

    NARCIS (Netherlands)

    Beerens, S.P.; Been, R.; Groen, J.; Noutary, E.; Doisy, Y.

    2000-01-01

    Triplet arrays are single line arrays with three hydrophones on a circular section of the array. The triplet structure provides immediate port-starboard (PS) discrimination. This paper discusses the theoretical and experimental performance of triplet arrays. Results are obtained on detection gain

  5. Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.

    Science.gov (United States)

    Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L

    2018-01-01

    Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low- to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated sparse Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order of magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design the resolution uniformity is improved by 95% compared to a uniformly populated design.

  6. Algorithm for real-time detection of signal patterns using phase synchrony: an application to an electrode array

    Science.gov (United States)

    Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael

    2011-02-01

    Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
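
    A common way to quantify "correlation of phase dynamics between channels" is the pairwise phase-locking value computed from Hilbert-transform phases; the sketch below shows that generic measure only, not the authors' ranked, event-specific detection algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_matrix(signals):
    """Pairwise phase-locking values. signals: (n_channels, n_samples) real array."""
    phases = np.angle(hilbert(signals, axis=1))       # instantaneous phase per channel
    n = signals.shape[0]
    plv = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]              # instantaneous phase difference
            plv[i, j] = plv[j, i] = float(np.abs(np.mean(np.exp(1j * dphi))))
    return plv
```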

  7. Adaptive Kernel Canonical Correlation Analysis Algorithms for Nonparametric Identification of Wiener and Hammerstein Systems

    Directory of Open Access Journals (Sweden)

    Ignacio Santamaría

    2008-04-01

    Full Text Available This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that simultaneously identifies both parts of the nonlinear system. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution to this problem. We then extend the proposed identification algorithm to an adaptive version allowing it to deal with time-varying systems. In order to avoid overfitting problems, we discuss and compare three possible regularization techniques for both the batch and the adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.

  8. An adaptive left–right eigenvector evolution algorithm for vibration isolation control

    International Nuclear Information System (INIS)

    Wu, T Y

    2009-01-01

    The purpose of this research is to investigate the feasibility of utilizing an adaptive left and right eigenvector evolution (ALREE) algorithm for active vibration isolation. As depicted in the previous paper presented by Wu and Wang (2008 Smart Mater. Struct. 17 015048), the structural vibration behavior depends on both the disturbance rejection capability and mode shape distributions, which correspond to the left and right eigenvector distributions of the system, respectively. In this paper, a novel adaptive evolution algorithm is developed for finding the optimal combination of left–right eigenvectors of the vibration isolator, which is an improvement over the simultaneous left–right eigenvector assignment (SLREA) method proposed by Wu and Wang (2008 Smart Mater. Struct. 17 015048). The isolation performance index used in the proposed algorithm is defined by combining the orthogonality index of left eigenvectors and the modal energy ratio index of right eigenvectors. Through the proposed ALREE algorithm, both the left and right eigenvectors evolve such that the isolation performance index decreases, and therefore one can find the optimal combination of left–right eigenvectors of the closed-loop system for vibration isolation purposes. The optimal combination of left–right eigenvectors is then synthesized to determine the feedback gain matrix of the closed-loop system. The result of the active isolation control shows that the proposed method can be utilized to improve the vibration isolation performance compared with the previous approaches

  9. An Adaptive Test Sheet Generation Mechanism Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Huan-Yu Lin

    2012-01-01

    Full Text Available For test-sheet composition systems, it is important to adaptively compose test sheets with diverse conceptual scopes, discrimination and difficulty degrees to meet various assessment requirements during real learning situations. Computation time and item exposure rate also influence performance and item bank security. Therefore, this study proposes an Adaptive Test Sheet Generation (ATSG) mechanism, where a Candidate Item Selection Strategy adaptively determines candidate test items and conceptual granularities according to desired conceptual scopes, and an Aggregate Objective Function applies a Genetic Algorithm (GA) to figure out the approximate solution of the mixed integer programming problem for test-sheet composition. Experimental results show that the ATSG mechanism can generate test sheets that meet the various assessment requirements more efficiently and precisely than existing ones. Furthermore, according to the experimental findings, a Fractal Time Series approach can be applied to analyze the self-similarity characteristics of GA’s fitness scores for improving the quality of the test-sheet composition in the near future.

  10. Some improvements on adaptive genetic algorithms for reliability-related applications

    Energy Technology Data Exchange (ETDEWEB)

    Ye Zhisheng, E-mail: yez@nus.edu.s [Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119 260 (Singapore); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Xie Min [Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119 260 (Singapore)

    2010-02-15

    Adaptive genetic algorithms (GAs) have been shown to be able to improve GA performance in reliability-related optimization studies. However, there are different ways to implement adaptive GAs, some of which are even in conflict with each other. In this study, a simple parameter-adjusting method using mean and variance of each generation is introduced. This method is used to compare two of such conflicting adaptive GA methods: GAs with increasing mutation rate and decreasing crossover rate and GAs with decreasing mutation rate and increasing crossover rate. The illustrative examples indicate that adaptive GAs with decreasing mutation rate and increasing crossover rate finally yield better results. Furthermore, a population disturbance method is proposed to avoid local optimum solutions. This idea is similar to exotic migration to a tribal society. To solve the problem of large solution space, a variable roughening method is also embedded into GA. Two case studies are presented to demonstrate the effectiveness of the proposed method.
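    The abstract does not reproduce the exact parameter-adjusting rule, so the sketch below only illustrates the general idea on a toy OneMax problem: per-generation fitness statistics drive the mutation and crossover rates, with mutation decreasing and crossover increasing as the population converges, the variant the study found to work better. The mapping from statistics to rates and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_POP, N_BITS, N_GEN = 40, 30, 60

def fitness(pop):
    # OneMax toy objective: number of ones in each bitstring.
    return pop.sum(axis=1).astype(float)

def select(pop, f):
    # Binary tournament selection.
    i, j = rng.integers(0, len(pop), 2)
    return pop[i] if f[i] >= f[j] else pop[j]

pop = rng.integers(0, 2, size=(N_POP, N_BITS))
for gen in range(N_GEN):
    f = fitness(pop)
    # Per-generation mean/std drive the rates (illustrative mapping only).
    spread = f.std() / (f.mean() + 1e-12)
    p_mut = np.clip(0.01 + 0.3 * spread, 0.01, 0.2)   # shrinks as the population converges
    p_cx = np.clip(0.95 - 0.5 * spread, 0.5, 0.95)    # grows as the population converges
    children = []
    while len(children) < N_POP:
        a, b = select(pop, f).copy(), select(pop, f).copy()
        if rng.random() < p_cx:                        # one-point crossover
            cut = rng.integers(1, N_BITS)
            a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
        for child in (a, b):
            flip = rng.random(N_BITS) < p_mut          # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
    pop = np.array(children[:N_POP])

print("best fitness:", fitness(pop).max())
```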

  11. Some improvements on adaptive genetic algorithms for reliability-related applications

    International Nuclear Information System (INIS)

    Ye Zhisheng; Li Zhizhong; Xie Min

    2010-01-01

    Adaptive genetic algorithms (GAs) have been shown to be able to improve GA performance in reliability-related optimization studies. However, there are different ways to implement adaptive GAs, some of which are even in conflict with each other. In this study, a simple parameter-adjusting method using mean and variance of each generation is introduced. This method is used to compare two of such conflicting adaptive GA methods: GAs with increasing mutation rate and decreasing crossover rate and GAs with decreasing mutation rate and increasing crossover rate. The illustrative examples indicate that adaptive GAs with decreasing mutation rate and increasing crossover rate finally yield better results. Furthermore, a population disturbance method is proposed to avoid local optimum solutions. This idea is similar to exotic migration to a tribal society. To solve the problem of large solution space, a variable roughening method is also embedded into GA. Two case studies are presented to demonstrate the effectiveness of the proposed method.

  12. Design of 2-D Recursive Filters Using Self-adaptive Mutation Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Lianghong Wu

    2011-08-01

    Full Text Available This paper investigates a novel approach to the design of two-dimensional recursive digital filters using the differential evolution (DE) algorithm. The design task is reformulated as a constrained minimization problem and is solved by a Self-adaptive Mutation DE algorithm (SAMDE), which adopts an adaptive mutation operator that combines the advantages of the DE/rand/1/bin and DE/best/2/bin strategies. As a result, its convergence performance is improved greatly, and numerical experiments confirm this conclusion. The proposed SAMDE approach is applied to a numerical example and compared with previous design methods. The computational experiments show that the SAMDE approach can obtain better results than previous design methods.
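    As a rough illustration of the mutation blending described above, the sketch below runs a bare-bones differential evolution loop on a stand-in sphere objective and switches randomly between a DE/rand/1 and a DE/best/2 donor; the 50/50 switching rule and the constants F and CR are assumptions, not the SAMDE adaptation, and the 2-D filter design cost function is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # Toy objective standing in for the constrained filter-design cost.
    return float(np.sum(x ** 2))

DIM, NP, F, CR, GENS = 10, 30, 0.5, 0.9, 200
pop = rng.uniform(-5, 5, size=(NP, DIM))
costs = np.array([cost(p) for p in pop])

for g in range(GENS):
    best = pop[np.argmin(costs)]
    for i in range(NP):
        idx = rng.choice([k for k in range(NP) if k != i], size=4, replace=False)
        a, b, c, d = pop[idx]
        if rng.random() < 0.5:                        # DE/rand/1 donor
            donor = a + F * (b - c)
        else:                                         # DE/best/2 donor
            donor = best + F * (a - b) + F * (c - d)
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True               # keep at least one donor component
        trial = np.where(cross, donor, pop[i])
        t_cost = cost(trial)
        if t_cost <= costs[i]:                        # greedy selection
            pop[i], costs[i] = trial, t_cost

print("best cost:", costs.min())
```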

  13. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Jinmo Sung

    2014-01-01

    Full Text Available The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approaches, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments.

  14. Direction-of-Arrival Estimation for Coprime Array Using Compressive Sensing Based Array Interpolation

    Directory of Open Access Journals (Sweden)

    Aihua Liu

    2017-01-01

    Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied only to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are obviously not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain the coarse initial DOA estimation, and a modified iterative initial DOA estimation based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimation, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
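    To make the geometry above concrete, the sketch below builds a standard coprime array for an example pair (M, N) = (3, 5), forms its difference coarray, and lists the holes that the interpolation step is meant to fill; the pair values are arbitrary and nothing of the compressive-sensing or IMCA-AI machinery is reproduced.

```python
import numpy as np

M, N = 3, 5                                    # coprime pair (example values)
# Standard coprime array: N sensors spaced M units apart, 2M sensors spaced N units apart.
positions = sorted(set(range(0, N * M, M)) | set(range(0, 2 * M * N, N)))

# Virtual difference coarray: all pairwise differences of sensor positions.
diff_set = {p - q for p in positions for q in positions}
max_lag = max(diff_set)
holes = [l for l in range(-max_lag, max_lag + 1) if l not in diff_set]

# Longest contiguous run of non-negative lags usable directly by SS-MUSIC.
contig = 0
while contig in diff_set:
    contig += 1

print("physical sensors:", positions)
print("distinct lags:", len(diff_set), " holes:", holes)
print("contiguous lags: 0 ..", contig - 1)
```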

  15. Application of Neuro-Wavelet Algorithm in Ultrasonic-Phased Array Nondestructive Testing of Polyethylene Pipelines

    Directory of Open Access Journals (Sweden)

    Reza Bohlouli

    2012-01-01

    Full Text Available Polyethylene (PE) pipelines with electrofusion (EF) joining are an essential means of transporting gas energy. EF joints are weak points for leakage and therefore nondestructive testing (NDT) methods, including ultrasonic array technology, are necessary. This paper presents a practical NDT method for fusion joints of polyethylene piping using intelligent ultrasonic image processing techniques. In the proposed method, to detect the defects of electrofusion joints, the NDT is applied based on an ANN-Wavelet method as a digital image processing technique. The proposed approach includes four steps. First, an ultrasonic phased-array technique is used to provide real-time images of high resolution. In the second step, the images are preprocessed by digital image processing techniques for noise reduction and detection of the region of interest (ROI). Furthermore, to improve the images further, mathematical morphology techniques such as dilation and erosion are applied. In the third step, a wavelet transform is used to develop a feature vector containing three-dimensional information on various types of defects. In the final step, all the feature vectors are classified through a backpropagation-based ANN algorithm. The obtained results show that the proposed algorithms are highly reliable and precise for NDT monitoring.

  16. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  17. An adaptive N-body algorithm of optimal order

    CERN Document Server

    Pruett, C D; Lacy, J M

    2003-01-01

    Picard iteration is normally considered a theoretical tool whose primary utility is to establish the existence and uniqueness of solutions to first-order systems of ordinary differential equations (ODEs). However, in 1996, Parker and Sochacki [Neural, Parallel, Sci. Comput. 4 (1996)] published a practical numerical method for a certain class of ODEs, based upon modified Picard iteration, that generates the Maclaurin series of the solution to arbitrarily high order. The applicable class of ODEs consists of first-order, autonomous systems whose right-hand side functions (generators) are projectively polynomial; that is, they can be written as polynomials in the unknowns. The class is wider than might be expected. The method is ideally suited to the classical N-body problem, which is projectively polynomial. Here, we recast the N-body problem in polynomial form and develop a Picard-based algorithm for its solution. The algorithm is highly accurate, parameter-free, and simultaneously adaptive in time and order. T...
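    The N-body implementation itself is not shown in the record, but the core Parker-Sochacki idea is easy to illustrate on a scalar projectively polynomial ODE. The sketch below applies modified Picard iteration to y' = y^2 with y(0) = 1, whose Maclaurin series is 1 + t + t^2 + ...; each sweep fixes one more coefficient. The truncation order is an arbitrary example.

```python
import numpy as np

def poly_mul(a, b, order):
    """Cauchy product of two truncated Maclaurin series (coefficient arrays)."""
    out = np.zeros(order + 1)
    for i, ai in enumerate(a[: order + 1]):
        for j, bj in enumerate(b[: order + 1 - i]):
            out[i + j] += ai * bj
    return out

def picard_maclaurin(y0, order):
    """Modified Picard iteration for y' = y**2, y(0) = y0.

    Each sweep y <- y0 + integral(y**2) fixes one more Maclaurin coefficient.
    """
    y = np.zeros(order + 1)
    y[0] = y0
    for _ in range(order):
        sq = poly_mul(y, y, order - 1)                 # (y_k)^2, truncated
        integ = np.zeros(order + 1)
        integ[1:] = sq / np.arange(1, order + 1)       # term-by-term integration
        y = integ
        y[0] = y0
    return y

print(picard_maclaurin(1.0, 8))    # expect coefficients [1, 1, 1, ...] of 1/(1 - t)
```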

  18. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present

  19. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    Science.gov (United States)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of the fiber Bragg grating (FBG) sensing signal and thus affect the quality of sensing detection. The recovery of a signal from observed noisy data is therefore necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove the noise from the signal. Empirical mode decomposition (EMD) is first used to decompose a signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and then the mutual information (MI) of partial modes is calculated. MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for the FBG spectral signal. Meanwhile, compared with similar algorithms, the signal-to-noise ratio of the signal can be improved by more than 10 dB after processing with the proposed algorithm, and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.

  20. STEADY ESTIMATION ALGORITHMS OF THE DYNAMIC SYSTEMS CONDITION ON THE BASIS OF CONCEPTS OF THE ADAPTIVE FILTRATION AND CONTROL

    Directory of Open Access Journals (Sweden)

    H.Z. Igamberdiyev

    2014-07-01

    Full Text Available Regularization algorithms for estimating the condition of dynamic systems under a priori uncertainty of the statistical characteristics of signals and disturbances are offered. Regular iterative algorithms for adjusting the gain-matrix elements of the Kalman filter are developed, allowing the filter to adapt to changing disturbance and signal conditions. Robust adaptive algorithms for estimating the state vector under a priori uncertainty of the covariance matrices of the object noise and the measurement disturbances are offered, providing a certain robustness of the filtering process with respect to changing statistical characteristics of the informative signal parameters. Practical results of applying the offered state estimation algorithms are given for the synthesis of adaptive control systems for the technological processes of granulation drying of ammophos pulp and ammonia production.

  1. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    Science.gov (United States)

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversal". Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm is tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future generation Braille reading devices thus holds strong promise.

  2. Radiation-induced adaptive response in fetal mice: a micro-array study

    International Nuclear Information System (INIS)

    Vares, G.; Bing, Wang; Mitsuru, Nenoi; Tetsuo, Nakajima; Kaoru, Tanaka; Isamu, Hayata

    2006-01-01

    Exposure to sublethal doses of ionizing radiation can induce protective mechanisms against a subsequent higher-dose irradiation. This phenomenon, called radio-adaptation (or adaptive response - AR), has been described in a wide range of biological models. In a series of studies, we demonstrated the existence of a radiation-induced AR in mice during late organogenesis. For better understanding of the molecular mechanisms underlying AR in our model, we performed a global analysis of transcriptome regulations in cells collected from whole mouse fetuses. Using cDNA micro-arrays, we studied gene expression in these cells after in utero priming exposure to irradiation. Several combinations of radiation dose and dose-rate were applied so as to induce or not induce an AR in our system. Gene regulation was observed after exposure to priming radiation in each condition. Student's t-test was performed in order to identify genes whose expression modulation was specifically different in AR-inducing and non-AR-inducing conditions. Genes were ranked according to their ability to discriminate AR-specific modulations. Since AR genes were implicated in a variety of functions and cellular processes, we applied a functional classification algorithm, which clustered genes into a limited number of functionally related groups. We established that AR genes are significantly enriched for specific keywords. Our results show a significant modulation of genes implicated in signal transduction pathways. No AR-specific alteration of DNA repair could be observed. Nevertheless, it is likely that modulation of DNA repair activity results, at least partly, from post-transcriptional regulation. One major hypothesis is that de-regulations of signal transduction pathways and apoptosis may be responsible for the AR phenotype. In previous work, we demonstrated that radiation-induced AR in mice during organogenesis is related to Trp53 gene status and to the occurrence of radiation-induced apoptosis. Other work proposed that p53

  3. A globally convergent MC algorithm with an adaptive learning rate.

    Science.gov (United States)

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research works show that convergence of neural networks based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of data set, which is unavailable in online extraction of minor component from input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information, and can be easily satisfied in practical applications.
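    The OJAn update and the paper's adaptive learning rate are not given in the abstract; as background only, the sketch below extracts a minor component online by stochastic descent of the Rayleigh quotient with explicit renormalization, using a plain decaying step-size schedule. The data model, the schedule, and all constants are assumptions and do not represent the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic zero-mean data with a known covariance, so the minor component is known.
d = 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
lam = np.array([5.0, 4.0, 3.0, 0.5])                  # eigenvalues of the covariance
mix = Q * np.sqrt(lam)                                # x = mix @ z has covariance Q diag(lam) Q^T

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for t in range(20000):
    x = mix @ rng.standard_normal(d)                  # one sample from the input stream
    eta = 0.05 / (1.0 + t / 2000.0)                   # simple decaying rate (assumed schedule)
    y = w @ x
    w = w - eta * (y * x - (y ** 2) * w)              # online descent of the Rayleigh quotient
    w /= np.linalg.norm(w)                            # keep the weight vector on the unit sphere

minor_true = Q[:, np.argmin(lam)]
print("alignment with true minor component:", abs(w @ minor_true))
```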

  4. Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation

    Science.gov (United States)

    Medina, H.; Mutu, R.

    2017-07-01

    An algorithm to estimate internal loads for single-row angular contact ball bearings due to externally applied thrust loads and high-operating speeds is presented. A new runtime adaptive relaxation procedure and blending function is proposed which ensures algorithm stability whilst also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time in excess of approximately 80%. The model is validated based on a 218 angular contact bearing and shows excellent agreement compared to published results.

  5. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    Science.gov (United States)

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt the alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    Science.gov (United States)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For the Cooperative Adaptive Cruise Control (CACC) algorithm, existing research mainly focuses on how inter-vehicle communication can be used to develop a CACC controller, and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyses the range of CACC controller parameters that maintain string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, this paper establishes a platoon of five vehicles to simulate the improved CACC algorithm in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain collision accidents under real-world circumstances. This research proposes an improved CACC algorithm, which can guarantee string stability when inter-vehicle communication is partially invalid.

  7. Wavefront sensing and adaptive control in phased array of fiber collimators

    Science.gov (United States)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.

    2011-03-01

    A new wavefront control approach for mitigation of atmospheric turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. Similar to a conventional Shack-Hartmann wavefront sensor, the phase function over the entire fiber-array aperture can then be retrieved using the coordinates obtained. The piston phases that are required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this laser beam projection system type is compared for various system configurations characterized by the number of fiber collimators and atmospheric turbulence conditions. The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios for which the time delay related to the double-pass laser beam propagation to the target and back is comparable to or even exceeds the characteristic time of atmospheric turbulence change.
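    The SPGD update mentioned above is simple enough to sketch in a few lines. Below it maximizes a quadratic stand-in for the received-power metric over four control variables (playing the role of fiber-tip commands); the gain, perturbation amplitude, and target values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def metric(u):
    """Stand-in for the measured power metric: peaks at u = target."""
    target = np.array([0.3, -0.7, 0.5, 0.1])
    return -np.sum((u - target) ** 2)

u = np.zeros(4)                      # control variables (e.g. tip/tilt commands)
gamma, delta = 1.0, 0.05             # SPGD gain and perturbation amplitude (arbitrary)

for _ in range(500):
    du = delta * rng.choice([-1.0, 1.0], size=u.size)   # parallel random perturbation
    dJ = metric(u + du) - metric(u - du)                 # two-sided metric difference
    u = u + gamma * dJ * du                              # SPGD ascent step

print("recovered controls:", np.round(u, 3), " metric:", round(float(metric(u)), 4))
```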

  8. An automated algorithm for photoreceptors counting in adaptive optics retinal images

    Science.gov (United States)

    Liu, Xu; Zhang, Yudong; Yun, Dai

    2012-10-01

    Eyes are important organs that detect light and form spatial and color vision. Knowing the exact number of cones in a retinal image is of great importance in helping us understand the mechanism of the eye's function and the pathology of some eye diseases. In order to analyze data in real time and process large-scale data, an automated algorithm is designed to label cone photoreceptors in adaptive optics (AO) retinal images. Images acquired by a flood-illuminated AO system are used to test the efficiency of this algorithm. We labeled these images both automatically and manually, and compared the results of the two methods. A 94.1% to 96.5% agreement rate between the two methods is achieved in this experiment, which demonstrates the reliability and efficiency of the algorithm.

  9. ANFIS Based Time Series Prediction Method of Bank Cash Flow Optimized by Adaptive Population Activity PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-06-01

    Full Text Available In order to improve the accuracy and timeliness of information in the cash business, and to address the low accuracy and stability of the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on the adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the adaptive network-based fuzzy inference system (ANFIS) model parameters. Through the introduction of a population-diversity metric to ensure the diversity of the population, and adaptive changes in inertia weight and learning factors, the optimization ability of the particle swarm optimization (PSO) algorithm is improved, which avoids the premature convergence problem of the PSO algorithm. Simulation comparison experiments are carried out against the BP-LMS algorithm and standard PSO-LMS, using real commercial banks’ cash flow data, to verify the effectiveness of the proposed time series prediction of bank cash flow based on the improved PSO-ANFIS optimization method. Simulation results show that the optimization speed is faster and the prediction accuracy is higher.

  10. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    Full Text Available This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.

  11. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    Science.gov (United States)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
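    A compact sketch of plain double-plateau histogram equalization is shown below: occupied histogram bins are clipped between a lower and an upper plateau before the cumulative mapping is built. The feedback adjustment of the thresholds via the normalized coefficient of variation, which is the paper's contribution, is not reproduced; the thresholds and the synthetic frame are example values.

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up, levels=256):
    """Histogram equalization with occupied-bin counts clipped to [t_low, t_up]."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    occupied = hist > 0
    hist[occupied] = np.clip(hist[occupied], t_low, t_up)   # double-plateau clipping
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)     # gray-level mapping
    return lut[img]

# Synthetic low-contrast "infrared" frame: values squeezed into a narrow band.
rng = np.random.default_rng(5)
frame = np.clip(rng.normal(100, 8, size=(128, 128)), 0, 255).astype(np.uint8)

enhanced = double_plateau_equalize(frame, t_low=4, t_up=400)
print("input range:", frame.min(), frame.max(), "-> output range:", enhanced.min(), enhanced.max())
```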

  12. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-12-15

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm with only two scans with a beam stop array, which estimates the scatter distribution with minimum additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.

  13. An implicit adaptation algorithm for a linear model reference control system

    Science.gov (United States)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  14. Self-adaptive global best harmony search algorithm applied to reactor core fuel management optimization

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.

    2013-01-01

    Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • The SGHS optimization algorithm obtained on average better fitness relative to the basic HS and GHS algorithms. • The upshot of the SGHS implementation in the LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm has some modifications in comparison with the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as dynamic changes of parameters. To demonstrate the ability of SGHS to find an optimal configuration of fuel assemblies, basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms have also been developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average current nodal expansion code which solves the multigroup diffusion equation by employing first and second orders of the Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per FA. Loading pattern optimization was performed using the SGHSNE package for some test cases to demonstrate the SGHS algorithm's capability of converging to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems.
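    For readers unfamiliar with the underlying metaheuristic, the sketch below is a generic harmony search skeleton (harmony memory, memory consideration, pitch adjustment, worst-member replacement) on a continuous toy objective. The SGHS-specific dynamic parameter rules and the loading-pattern encoding are not reproduced, and every constant is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    # Toy stand-in for the loading-pattern fitness evaluated by the nodal code.
    return float(np.sum((x - 1.0) ** 2))

DIM, HMS, HMCR, PAR, BW, ITERS = 5, 10, 0.9, 0.3, 0.1, 3000
low, high = -5.0, 5.0

hm = rng.uniform(low, high, size=(HMS, DIM))           # harmony memory
costs = np.array([objective(h) for h in hm])

for _ in range(ITERS):
    new = np.empty(DIM)
    for k in range(DIM):
        if rng.random() < HMCR:                        # memory consideration
            new[k] = hm[rng.integers(HMS), k]
            if rng.random() < PAR:                     # pitch adjustment
                new[k] += BW * rng.uniform(-1, 1)
        else:                                          # random selection
            new[k] = rng.uniform(low, high)
    new = np.clip(new, low, high)
    c = objective(new)
    worst = np.argmax(costs)
    if c < costs[worst]:                               # replace the worst harmony
        hm[worst], costs[worst] = new, c

print("best cost:", costs.min())
```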

  15. Path Planning Algorithms for the Adaptive Sensor Fleet

    Science.gov (United States)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
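    The fleet-level shuffle can be pictured with a tiny example: given a matrix of vessel-to-region travel times, enumerate the assignments and keep the one that minimizes the time at which the last vessel reaches its region. The brute-force enumeration below is only meant to convey the idea for a small fleet, and the travel times are invented.

```python
import numpy as np
from itertools import permutations

# cost[i][j]: travel time for vessel i to reach subregion j (illustrative values).
cost = np.array([
    [4.0, 9.0, 6.0],
    [7.0, 3.0, 8.0],
    [5.0, 6.0, 2.0],
])

best_assignment, best_makespan = None, np.inf
for perm in permutations(range(cost.shape[1])):        # vessel i -> region perm[i]
    makespan = max(cost[i, r] for i, r in enumerate(perm))
    if makespan < best_makespan:
        best_assignment, best_makespan = perm, makespan

print("assignment (vessel -> region):", best_assignment)
print("all vessels in place after:", best_makespan)
```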

  16. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
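    As a reminder of the exact-SSA building block used above, the sketch below runs Gillespie's algorithm for a single reversible isomerization A <-> B in one well-mixed compartment; the hybrid tau-leap/macroscopic treatment and the spatial mesh are beyond this sketch, and the rate constants and initial counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Reversible isomerization A <-> B with rate constants k1 (A->B) and k2 (B->A).
k1, k2 = 1.0, 0.5
n_a, n_b = 100, 0
t, t_end = 0.0, 10.0

while True:
    a1, a2 = k1 * n_a, k2 * n_b              # reaction propensities
    a0 = a1 + a2
    if a0 == 0.0:
        break
    tau = rng.exponential(1.0 / a0)           # waiting time to the next reaction
    if t + tau > t_end:
        break
    t += tau
    if rng.random() < a1 / a0:                # pick which reaction fires
        n_a, n_b = n_a - 1, n_b + 1
    else:
        n_a, n_b = n_a + 1, n_b - 1

print(f"state at t = {t_end}: A = {n_a}, B = {n_b}")
```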

  17. The Improved Adaptive Silence Period Algorithm over Time-Variant Channels in the Cognitive Radio System

    Directory of Open Access Journals (Sweden)

    Jingbo Zhang

    2018-01-01

    Full Text Available In the field of cognitive radio spectrum sensing, the adaptive silence period management mechanism (ASPM) has improved on the low time-resource utilization rate of the traditional silence period management mechanism (TSPM). However, in the case of a low signal-to-noise ratio (SNR), the ASPM algorithm will increase the probability of missed detection of the primary user (PU). Focusing on this problem, this paper proposes an improved adaptive silence period management (IA-SPM) algorithm which can adaptively adjust the sensing parameters of the current period by combining feedback information from the data communication with the sensing results of the previous period. The feedback information in the channel is achieved with frequency resources rather than time resources in order to adapt to parameter changes in the time-varying channel. The Monte Carlo simulation results show that the detection probability of the IA-SPM is 10–15% higher than that of the ASPM under low SNR conditions.

  18. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    Science.gov (United States)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process to match a player’s playing style with the behaviour, attributes, and game environment. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, which was developed using Unity. Games of this type feature a fighter aircraft character trying to defend itself from insistent enemy attacks. This classic game type was chosen to implement adaptive algorithms because it has quite complex attributes to be developed using dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.

  19. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.

    Science.gov (United States)

    Li, Yuhong; Gong, Guanghong; Li, Ni

    2018-01-01

    In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. A set of canonical networks and a list of real-world networks were used in experiments. Comparison results demonstrated that the algorithm was better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there were links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improve the controllability optimization of large networks or even extra-large networks with hundreds of thousands of nodes.

  20. A Parallel Adaptive Particle Swarm Optimization Algorithm for Economic/Environmental Power Dispatch

    Directory of Open Access Journals (Sweden)

    Jinchao Li

    2012-01-01

    Full Text Available A parallel adaptive particle swarm optimization algorithm (PAPSO) is proposed for economic/environmental power dispatch, which can overcome premature convergence, slow convergence in the late evolutionary phase, and the lack of good search direction in the particles’ evolutionary process. The search population is randomly divided into several subpopulations. Then, for each subpopulation, the optimal solution is searched synchronously using the proposed method, and thus parallel computing is realized. To avoid converging to a local optimum, a crossover operator is introduced to exchange information among the subpopulations, and the diversity of the population is sustained simultaneously. Simulation results show that the proposed algorithm can effectively solve the economic/environmental operation problem of hydropower generating units. Performance comparisons show that the solution from the proposed method is better than those from the conventional particle swarm algorithm and other optimization algorithms.

  1. An effective algorithm for approximating adaptive behavior in seasonal environments

    DEFF Research Database (Denmark)

    Sainmont, Julie; Andersen, Ken Haste; Thygesen, Uffe Høgsbro

    2015-01-01

    Behavior affects most aspects of ecological processes and rates, and yet modeling frameworks which efficiently predict and incorporate behavioral responses into ecosystem models remain elusive. Behavioral algorithms based on life-time optimization, adaptive dynamics or game theory are unsuited for large global models because of their high computational demand. We compare an easily integrated, computationally efficient behavioral algorithm known as Gilliam's rule against the solution from a life-history optimization. The approximation takes into account only the current conditions to optimize behavior; the so-called "myopic approximation", "short sighted", or "static optimization". We explore the performance of the myopic approximation with diel vertical migration (DVM) as an example of a daily routine, a behavior with seasonal dependence that trades off predation risk with foraging...

  2. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Science.gov (United States)

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  3. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2015-01-01

    Full Text Available Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
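    The transformation component alone is easy to sketch: normalized gray levels are passed through the regularized incomplete beta function, whose two shape parameters are exactly the kind of quantity the CS-PSO search would tune. In the snippet below the parameters are fixed example values rather than optimized ones, and the quality criterion is not implemented.

```python
import numpy as np
from scipy.special import betainc

def beta_transform(img, a, b):
    """Global contrast transform: normalize, map through I_x(a, b), rescale to 8 bits."""
    x = (img - img.min()) / max(float(img.max() - img.min()), 1e-12)
    y = betainc(a, b, x)                         # regularized incomplete beta function
    return np.round(255 * y).astype(np.uint8)

rng = np.random.default_rng(8)
low_contrast = np.clip(rng.normal(120, 10, size=(64, 64)), 0, 255)

enhanced = beta_transform(low_contrast, a=2.0, b=2.0)    # (a, b) would be tuned by CS-PSO
print("output spans", enhanced.min(), "to", enhanced.max())
```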

  4. Novel adaptive distance relay algorithm considering the operation of 154 kV SFCL in Korean power transmission system

    International Nuclear Information System (INIS)

    Lee, S.; Lee, J.; Song, S.; Yoon, J.; Lee, B.

    2015-01-01

    Highlights: • This paper presents an outline of the new project of the 154 kV SFCL in Korea. • And then we review some protection problems for the application of 154 kV SFCLs. • This paper proposes a new adaptive protection algorithm for 154 kV SFCLs. • The developed algorithm is tested in a simple distance relay system. - Abstract: In general, SFCLs can have a negative impact on the protective coordination in power transmission system because of the variable impedance of SFCLs. It is very important to solve the protection problems of the power system for the successful application of SFCLs to real power transmission system. This paper reviews some protection problems which can be caused by the application of 154 kV SFCLs to power transmission systems in South Korea. And then we propose an adaptive protection algorithm to solve the problems. The adaptive protection algorithm uses the real time information of the SFCL system operation.

  5. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    Science.gov (United States)

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected on line using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.

  6. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    Directory of Open Access Journals (Sweden)

    Tongxin Shu

    2017-11-01

    Full Text Available Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected on line using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
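    The DDASA decision rule itself is not given in the abstract, so the sketch below only conveys the general principle of data-driven adaptive sampling: shorten the sampling interval when consecutive dissolved-oxygen readings change quickly and lengthen it when the signal is quiet. The simulated sensor, thresholds, and interval bounds are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(9)

def read_dissolved_oxygen(t):
    """Simulated DO sensor (mg/L): slow daily cycle, noise, and a short disturbance."""
    return (8.0 + 1.5 * np.sin(2 * np.pi * t / 86400.0)
            + (1.0 if 30000 < t < 32000 else 0.0)
            + 0.05 * rng.standard_normal())

t, t_end = 0.0, 86400.0                                # one day, in seconds
interval, min_int, max_int = 600.0, 60.0, 1800.0       # current / min / max sampling interval
threshold = 0.2                                        # change (mg/L) considered "fast"
last = read_dissolved_oxygen(t)
samples = 1

while t < t_end:
    t += interval
    value = read_dissolved_oxygen(t)
    samples += 1
    # Data-driven adjustment: sample faster when the signal moves, slower when it is quiet.
    if abs(value - last) > threshold:
        interval = max(min_int, interval / 2.0)
    else:
        interval = min(max_int, interval * 1.25)
    last = value

print("samples taken over one day:", samples)
```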

  7. A New Fuzzy Harmony Search Algorithm Using Fuzzy Logic for Dynamic Parameter Adaptation

    Directory of Open Access Journals (Sweden)

    Cinthia Peraza

    2016-10-01

    Full Text Available In this paper, a new fuzzy harmony search algorithm (FHS) for solving optimization problems is presented. FHS is based on a recent method using fuzzy logic for dynamic adaptation of the harmony memory accepting (HMR) and pitch adjustment (PAR) parameters, which improves the convergence rate of the traditional harmony search algorithm (HS). The objective of the method is to dynamically adjust the parameters in the range from 0.7 to 1. The impact of using fixed parameters in the harmony search algorithm is discussed and a strategy for efficiently tuning these parameters using fuzzy logic is presented. The FHS algorithm was successfully applied to different benchmark optimization problems. The results of simulation and comparison studies demonstrate the effectiveness and efficiency of the proposed approach.

  8. Improving source discrimination performance by using an optimized acoustic array and adaptive high-resolution CLEAN-SC beamforming

    NARCIS (Netherlands)

    Luesutthiviboon, S.; Malgoezar, A.M.N.; Snellen, M.; Sijtsma, P.; Simons, D.G.

    2018-01-01

    Beamforming performance can be improved in two ways: optimizing the location of microphones on the acoustic array and applying advanced beamforming algorithms. In this study, the effects of the two approaches are studied. An optimization method is developed to optimize the location of microphones

  9. Improving the Stability of the LMF Adaptive Algorithm Using the Median Filter

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Rusu, Corneliu

    1998-01-01

    environments and secondly it enables the use of larger step-size of the adaptive algorithm especially when the signals are corrupted by noise. The disadvantages are a small raise in the computational complexity and slower convergence than the LMF. Two examples are given which illustrates the behavior...
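    For reference, the core least-mean-fourth (LMF) update that the paper builds on is identical to LMS except that the error enters cubed. The sketch below identifies a short FIR channel from noisy data; the median-filtering modification discussed in the paper is not included, and the step size and channel taps are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

h_true = np.array([0.5, -0.3, 0.2])           # unknown FIR system to identify
n_taps, mu, n_samples = 3, 2e-3, 20000

x = rng.standard_normal(n_samples)
d = np.convolve(x, h_true, mode="full")[:n_samples] + 0.05 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
for n in range(n_taps, n_samples):
    u = x[n - n_taps + 1:n + 1][::-1]         # current regressor [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ u                           # a-priori error
    w = w + mu * (e ** 3) * u                  # LMF update: stochastic gradient of E[e^4]

print("estimated taps:", np.round(w, 3))
```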

  10. Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms

    International Nuclear Information System (INIS)

    Talpaert, Arthur

    2017-01-01

    This PhD work presents the implementation of the simulation of two-phase flows in conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve that, we study several models for thermal-hydraulic flows and we focus on a technique for the capture of the thin interface between liquid and vapour phases. We thus review some possible techniques for adaptive mesh refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve the precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations we present also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author) [fr

  11. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure the particle swarm is optimized globally and to avoid it falling into a local optimum. The Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to those with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. Then the corrected image was segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias field.
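    A sketch of the overall idea is given below: a particle swarm whose inertia weight is driven by a crude premature-convergence indicator (the relative spread of the current fitness values), raising the inertia when the swarm looks converged so that it keeps exploring. The indicator, the mapping to the weight, and the multimodal toy objective are all assumptions; the Legendre-polynomial bias-field fitting is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def rastrigin(x):
    # Multimodal toy objective standing in for the bias-field fitting cost.
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

DIM, N, ITERS = 5, 30, 400
pos = rng.uniform(-5.12, 5.12, size=(N, DIM))
vel = np.zeros((N, DIM))
pbest, pbest_f = pos.copy(), np.array([rastrigin(p) for p in pos])

for _ in range(ITERS):
    f = np.array([rastrigin(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

    # Premature-convergence indicator: relative spread of the current fitness values.
    spread = f.std() / (abs(f.mean()) + 1e-12)
    w = 0.4 + 0.5 * np.exp(-5.0 * spread)      # nearly converged swarm -> larger inertia
    c1 = c2 = 1.8

    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)

print("best value found:", pbest_f.min())
```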

  12. Non-Circular MUSIC Algorithm Based on Virtual Interpolated Array

    Institute of Scientific and Technical Information of China (English)

    陈浩; 宋爱民; 刘剑

    2012-01-01

    A non-circular MUSIC (multiple signal classification) algorithm based on a virtual interpolated array (VIA-NC-MUSIC) is proposed for direction of arrival (DOA) estimation of non-circular signals. Using the transformation matrix obtained from the real array manifold and the virtual array manifold, the real covariance matrix is converted to a virtual covariance matrix, and the spatial spectrum function is obtained from the orthogonality of the signal subspace and the noise subspace after singular value decomposition (SVD) of the virtual covariance matrix. Simulation results show that, when sensor position errors exist, the new algorithm, by applying the virtual interpolated array (VIA) transformation to the sensor position calibration data, achieves estimation performance comparable to that of the non-circular MUSIC algorithm (NC-MUSIC) with calibrated sensor positions, while retaining the advantages of high estimation accuracy and array extension capability.
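
    For readers unfamiliar with the underlying spectrum, the sketch below implements only the standard MUSIC pseudo-spectrum on a uniform linear array; the virtual interpolated array transformation and the non-circular extension described in the record are not reproduced, and the array geometry and signal parameters are illustrative assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Standard MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvecs[:, :n_sensors - n_sources]        # noise subspace
    k = np.arange(n_sensors)
    p = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * k * np.sin(theta))   # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Two sources at -20 and 30 degrees observed by an 8-element half-wavelength ULA.
rng = np.random.default_rng(1)
n_sensors, n_snap = 8, 200
doa = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n_sensors), np.sin(doa)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                   + 1j * rng.standard_normal((n_sensors, n_snap)))
ang, spec = music_spectrum(X, n_sources=2)
# report the two strongest local maxima of the pseudo-spectrum
peaks = [i for i in range(1, len(spec) - 1) if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
top2 = sorted(peaks, key=lambda i: spec[i])[-2:]
print(sorted(ang[i] for i in top2))   # expected near -20 and 30 degrees
```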

  13. A Hybrid Fuzzy Genetic Algorithm for an Adaptive Traffic Signal System

    Directory of Open Access Journals (Sweden)

    S. M. Odeh

    2015-01-01

    Full Text Available This paper presents a hybrid algorithm that combines a Fuzzy Logic Controller (FLC) and Genetic Algorithms (GAs) and its application to a traffic signal system. FLCs have been widely used in many applications in diverse areas, such as control systems, pattern recognition, signal processing, and forecasting. They are, essentially, rule-based systems, in which the definition of these rules and fuzzy membership functions is generally based on verbally formulated rules that overlap through the parameter space. They have a great influence over the performance of the system. On the other hand, the Genetic Algorithm is a metaheuristic that provides a robust search in complex spaces. In this work, it has been used to adapt the decision rules of FLCs that define an intelligent traffic signal system, obtaining a higher performance than a classical FLC-based control. The simulation results yielded by the hybrid algorithm show an improvement of up to 34% in the performance with respect to a standard traffic signal controller, the Conventional Traffic Signal Controller (CTC), and up to 31% in comparison with a traditional fuzzy logic controller (FLC).

  14. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    Science.gov (United States)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  15. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    loubna benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users rely on expert knowledge and must try many combinations manually to find the best one. As performance, time and accuracy are important, it is necessary to automate parameter optimization at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of setting algorithm parameters and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Some experiments on real examples and comparisons to other metaheuristic-based approaches, particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.

  16. Local adaptive mesh refinement for shock hydrodynamics

    International Nuclear Information System (INIS)

    Berger, M.J.; Colella, P.; Lawrence Livermore Laboratory, Livermore, 94550 California)

    1989-01-01

    The aim of this work is the development of an automatic, adaptive mesh refinement strategy for solving hyperbolic conservation laws in two dimensions. There are two main difficulties in doing this. The first problem is due to the presence of discontinuities in the solution and the effect on them of discontinuities in the mesh. The second problem is how to organize the algorithm to minimize memory and CPU overhead. This is an important consideration and will continue to be important as more sophisticated algorithms that use data structures other than arrays are developed for use on vector and parallel computers. copyright 1989 Academic Press, Inc

  17. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    Science.gov (United States)

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, both in clean and noisy environments.

  18. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    International Nuclear Information System (INIS)

    Rolland, Joran; Simonnet, Eric

    2015-01-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations

  19. Adaptation of a fuzzy controller’s scaling gains using genetic algorithms for balancing an inverted pendulum

    Directory of Open Access Journals (Sweden)

    Duka Adrian-Vasile

    2011-12-01

    Full Text Available This paper examines the development of a genetic adaptive fuzzy control system for the inverted pendulum. The inverted pendulum is a classical problem in control engineering, used for testing different control algorithms. The goal is to balance the inverted pendulum in the upright position by controlling the horizontal force applied to its cart. Because it is unstable and has complicated nonlinear dynamics, the inverted pendulum is a good testbed for the development of nonconventional advanced control techniques. Fuzzy logic techniques have been successfully applied to control this type of system; however, most of the time the design of the fuzzy controller is done in an ad-hoc manner, and choosing certain parameters (controller gains, membership functions) proves difficult. This paper examines the implementation of an adaptive control method based on genetic algorithms (GA), which can be used on-line to adapt the fuzzy controller's gains in order to achieve the stabilization of the pendulum. The performances of the proposed control algorithms are evaluated and shown by means of digital simulation.

  20. An improved self-adaptive ant colony algorithm based on genetic strategy for the traveling salesman problem

    Science.gov (United States)

    Wang, Pan; Zhang, Yi; Yan, Dong

    2018-05-01

    The Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been used successfully on the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optima and its computation time is long. To overcome those shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and new crossover and inversion operations from the genetic strategy are used. We also ran experiments on well-known instances from TSPLIB. The experimental results show that the performance of the proposed method is better than the basic Ant Colony Algorithm and some improved ACAs in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known solutions.

  1. A Distance-Adaptive Refueling Recommendation Algorithm for Self-Driving Travel

    Directory of Open Access Journals (Sweden)

    Quanli Xu

    2018-03-01

    Full Text Available Taking the maximum vehicle driving distance, the distances from gas stations, the route length, and the number of refueling gas stations as the decision conditions, recommendation rules and an early refueling service warning mechanism for gas stations along a self-driving travel route were constructed by using the algorithm presented in this research, based on the spatial clustering characteristics of gas stations and the urgency of refueling. Meanwhile, by combining ArcEngine and Matlab capabilities, a scenario simulation system of refueling for self-driving travel was developed by using c#.net in order to validate and test the accuracy and applicability of the algorithm. A total of nine testing schemes with four simulation scenarios were designed and executed using this algorithm, and all of the simulation results were consistent with expectations. The refueling recommendation algorithm proposed in this study can automatically adapt to changes in the route length of self-driving travel, the maximum driving distance of the vehicle, and the distance from gas stations, which could provide variable refueling recommendation strategies according to differing gas station layouts along the route. Therefore, the results of this study could provide a scientific reference for the reasonable planning and timely supply of vehicle refueling during self-driving travel.

  2. A comparison of two adaptive algorithms for the control of active engine mounts

    Science.gov (United States)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
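
    As a point of reference for the FXLMS benchmark mentioned above, here is a minimal single-channel filtered-x LMS loop on a synthetic tone; the secondary-path model, step size and signal parameters are illustrative assumptions, and the engine-mount hardware and the Er-MCSI controller of the record are not modeled.

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=0.005):
    """Single-channel filtered-x LMS sketch (secondary-path model s_hat assumed known).

    x: reference signal, d: disturbance at the error sensor. Returns the error signal.
    """
    L = len(s_hat)
    w = np.zeros(n_taps)          # adaptive controller taps
    xw = np.zeros(n_taps)         # reference history seen by the controller
    xs = np.zeros(L)              # reference history for filtering through s_hat
    ys = np.zeros(L)              # controller-output history through s_hat
    fx = np.zeros(n_taps)         # filtered-reference history for the weight update
    e = np.zeros(len(x))
    for n in range(len(x)):
        xw = np.roll(xw, 1); xw[0] = x[n]
        xs = np.roll(xs, 1); xs[0] = x[n]
        y = w @ xw                          # control signal sent to the actuator
        ys = np.roll(ys, 1); ys[0] = y
        e[n] = d[n] + s_hat @ ys            # residual at the error sensor
        fx = np.roll(fx, 1); fx[0] = s_hat @ xs   # reference filtered by the path model
        w -= mu * e[n] * fx                 # LMS update on the filtered reference
    return e

# Toy demo: cancel a 25 Hz disturbance through a short FIR secondary path.
fs, T = 1000, 4.0
t = np.arange(int(fs * T)) / fs
x = np.sin(2 * np.pi * 25 * t)                     # reference (tachometer-like)
d = 0.8 * np.sin(2 * np.pi * 25 * t + 0.7)         # disturbance at the sensor
s = np.array([0.0, 0.6, 0.3, 0.1])                 # assumed secondary-path model
e = fxlms(x, d, s_hat=s)
print(np.std(e[:500]), np.std(e[-500:]))           # early vs late residual level
```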

  3. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.

    Science.gov (United States)

    Kim, Jinkwon; Min, Se Dong; Lee, Myoungho

    2011-06-27

    Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as a classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  4. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects

    Directory of Open Access Journals (Sweden)

    Min Se Dong

    2011-06-01

    Full Text Available Abstract Background Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as a classifier in the proposed algorithm. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  5. Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Emer Bernal

    2017-01-01

    Full Text Available In this paper we are presenting a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, which is usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and what parameters have more effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly applied to benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.

  6. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is 𝒪((JNS)^3). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  7. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired Ripple Fixed-Pattern Noise (FPN) when observing sky scenes. The Ripple Fixed-Pattern Noise seriously affects the imaging quality of a thermal imager, especially for small target detection and tracking. It is hard to eliminate this FPN with calibration-based techniques or with current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for Ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
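
    A minimal sketch of the temporal high-pass idea with a change threshold to limit ghosting is shown below; the spatial low-pass stage, the adaptive time-domain threshold of THP&GM and the FPGA implementation are not reproduced, and the function name, threshold and filter constants are illustrative assumptions.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05, change_thresh=8.0):
    """Minimal temporal high-pass nonuniformity correction sketch.

    frames: (T, H, W) raw infrared frames.
    alpha: recursive low-pass coefficient for the per-pixel offset estimate.
    change_thresh: pixels whose frame-to-frame change exceeds this value are
        treated as moving scene content and their offset is frozen, which
        limits ghosting (a simple stand-in for an adaptive time-domain threshold).
    """
    offset = np.zeros(frames.shape[1:], dtype=np.float64)
    prev = frames[0].astype(np.float64)
    corrected = np.empty_like(frames, dtype=np.float64)
    for t, f in enumerate(frames.astype(np.float64)):
        update_mask = np.abs(f - prev) < change_thresh
        # slow low-pass estimate of the fixed pattern, frozen where the scene moves
        offset[update_mask] += alpha * (f[update_mask] - offset[update_mask])
        corrected[t] = f - (offset - offset.mean())   # high-pass output, mean level kept
        prev = f
    return corrected

# Toy demo: a static ripple pattern plus a moving bright blob.
rng = np.random.default_rng(0)
H = W = 64
ripple = 5.0 * np.sin(np.arange(W) / 3.0)[None, :] * np.ones((H, 1))
frames = np.zeros((200, H, W))
for t in range(200):
    scene = np.full((H, W), 100.0)
    c = (t // 4) % (W - 8)
    scene[20:28, c:c + 8] += 30.0                      # moving target
    frames[t] = scene + ripple + rng.normal(0, 1, (H, W))
out = temporal_highpass_nuc(frames)
print(np.std(frames[-1]), np.std(out[-1]))             # spatial spread before vs after
```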

  8. A low-complexity joint 2D-DOD and 2D-DOA estimation algorithm for MIMO radar with arbitrary arrays

    Science.gov (United States)

    Chen, Chen; Zhang, Xiaofei

    2013-10-01

    In this article, we study the problem of four-dimensional angle estimation for bistatic multiple-input multiple-output (MIMO) radar with arbitrary arrays, and propose a joint two-dimensional direction of departure (2D-DOD) and two-dimensional direction of arrival (2D-DOA) estimation algorithm. Our algorithm extends the propagator method (PM) to angle estimation in MIMO radar. The proposed algorithm does not require peak searching or eigenvalue decomposition of the received signal covariance matrix, and therefore has low computational complexity. It can also achieve automatic pairing of the four-dimensional angles. Furthermore, the proposed algorithm has much better angle estimation performance than the interpolated estimation of signal parameters via rotational invariance techniques (ESPRIT) method, and its angle estimation performance is very close to that of the ESPRIT-like algorithm, which has higher computational cost than the proposed algorithm. We also analyze the complexity and angle estimation error of the algorithm, and derive the Cramer-Rao bound (CRB). The simulation results verify the effectiveness and improvement of the proposed algorithm.

  9. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS Signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community, regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated a good performance but not without drawbacks already discussed by the authors. On the other hand preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology in case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation in artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real world MRS signals are necessary, the herein shown performance illustrates the method's potential to be established as a generic MRS metabolites quantification procedure.

  10. Adaptive local backlight dimming algorithm based on local histogram and image characteristics

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari

    2013-01-01

    Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on this average, an adaptive quantile value is extracted, providing a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature-based backlight algorithms.
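
    The sketch below illustrates only the basic step of picking one backlight level per segment from a quantile of its local histogram; the adaptive quantile rule, the LCD pixel-compensation model and the power/quality evaluation of the paper are not reproduced, and the grid size and quantile values are illustrative assumptions.

```python
import numpy as np

def segment_backlight_levels(img, grid=(4, 6), base_quantile=0.9):
    """Choose one backlight level per segment from the local pixel histogram.

    img: 2-D grayscale image with values in [0, 255].
    The quantile is nudged by the segment's mean intensity (brighter segments
    use a higher quantile), a simple stand-in for an adaptive rule.
    """
    H, W = img.shape
    gh, gw = grid
    levels = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            seg = img[i * H // gh:(i + 1) * H // gh, j * W // gw:(j + 1) * W // gw]
            q = np.clip(base_quantile + 0.1 * (seg.mean() / 255.0 - 0.5), 0.5, 1.0)
            levels[i, j] = np.quantile(seg, q) / 255.0   # normalized LED duty cycle
    return levels

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(240, 360)).astype(np.float64)
img[:120, :180] *= 0.2                      # a dark quadrant can be dimmed strongly
print(segment_backlight_levels(img).round(2))
```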

  11. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    Science.gov (United States)

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support the clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Trans gene regulation in adaptive evolution: a genetic algorithm model.

    Science.gov (United States)

    Behera, N; Nanjundiah, V

    1997-09-21

    This is a continuation of earlier studies on the evolution of infinite populations of haploid genotypes within a genetic algorithm framework. We had previously explored the evolutionary consequences of the existence of indeterminate ("plastic") loci, where a plastic locus had a finite probability in each generation of functioning (being switched "on") or not functioning (being switched "off"). The relative probabilities of the two outcomes were assigned on a stochastic basis. The present paper examines what happens when the transition probabilities are biased by the presence of regulatory genes. We find that under certain conditions regulatory genes can improve the adaptation of the population and speed up the rate of evolution (on occasion at the cost of lowering the degree of adaptation). Also, the existence of regulatory loci potentiates selection in favour of plasticity. There is a synergistic effect of regulatory genes on plastic alleles: the frequency of such alleles increases when regulatory loci are present. Thus, phenotypic selection alone can be a potentiating factor in favour of better adaptation. Copyright 1997 Academic Press Limited.

  13. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

    Full Text Available Aimed at solving the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. The IMM-UKF is divided into two links. In the external link, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction, which constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem and leads to an adaptive matrix that balances the contribution of the state. In the internal link, the algorithm provides filtering solutions through a new adaptive UKF algorithm (NAUKF) that addresses sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.

  14. Modifications of the branch-and-bound algorithm for application in constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2000-01-01

    A mathematical programming approach is presented for computer adaptive testing (CAT) with many constraints on the item and test attributes. Because mathematical programming problems have to be solved while the examinee waits for the next item, a fast implementation of the Branch-and-Bound algorithm

  15. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    Science.gov (United States)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day with each detection, followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.

  16. An adaptive N-body algorithm of optimal order

    International Nuclear Information System (INIS)

    Pruett, C. David.; Rudmin, Joseph W.; Lacy, Justin M.

    2003-01-01

    Picard iteration is normally considered a theoretical tool whose primary utility is to establish the existence and uniqueness of solutions to first-order systems of ordinary differential equations (ODEs). However, in 1996, Parker and Sochacki [Neural, Parallel, Sci. Comput. 4 (1996)] published a practical numerical method for a certain class of ODEs, based upon modified Picard iteration, that generates the Maclaurin series of the solution to arbitrarily high order. The applicable class of ODEs consists of first-order, autonomous systems whose right-hand side functions (generators) are projectively polynomial; that is, they can be written as polynomials in the unknowns. The class is wider than might be expected. The method is ideally suited to the classical N-body problem, which is projectively polynomial. Here, we recast the N-body problem in polynomial form and develop a Picard-based algorithm for its solution. The algorithm is highly accurate, parameter-free, and simultaneously adaptive in time and order. Test cases for both benign and chaotic N-body systems reveal that optimal order is dynamic. That is, in addition to dependency upon N and the desired accuracy, optimal order depends upon the configuration of the bodies at any instant
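
    The Parker-Sochacki / modified Picard idea of generating Maclaurin coefficients by term-by-term integration can be illustrated on the harmonic oscillator, a projectively polynomial system far simpler than the N-body problem treated in the record; the sketch below is only that illustration, with an arbitrary truncation order.

```python
import numpy as np

def maclaurin_oscillator(x0, v0, order):
    """Maclaurin coefficients of x'' = -x written as the polynomial first-order
    system x' = v, v' = -x. Each step integrates the current series term by term
    and adds one more order (the modified Picard / Parker-Sochacki idea)."""
    a = np.zeros(order + 1)   # coefficients of x(t)
    b = np.zeros(order + 1)   # coefficients of v(t)
    a[0], b[0] = x0, v0
    for k in range(order):
        a[k + 1] = b[k] / (k + 1)      # integrate x' = v
        b[k + 1] = -a[k] / (k + 1)     # integrate v' = -x
    return a, b

def eval_series(coeffs, t):
    return sum(c * t**k for k, c in enumerate(coeffs))

a, b = maclaurin_oscillator(x0=1.0, v0=0.0, order=20)
t = 0.5
print(eval_series(a, t), np.cos(t))   # truncated series vs exact solution cos(t)
```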

  17. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    Science.gov (United States)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.

  18. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    Science.gov (United States)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  19. A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment

    International Nuclear Information System (INIS)

    Jiang Chuanwen; Bompard, Etorre

    2005-01-01

    This paper proposes a short term hydroelectric plant dispatch model based on the rule of maximizing the benefit. For the optimal dispatch model, which is a large scale nonlinear planning problem with multi-constraints and multi-variables, this paper proposes a novel self-adaptive chaotic particle swarm optimization algorithm to solve the short term generation scheduling of a hydro-system better in a deregulated environment. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed approach introduces chaos mapping and an adaptive scaling term into the particle swarm optimization algorithm, which increases its convergence rate and resulting precision. The new method has been examined and tested on a practical hydro-system. The results are promising and show the effectiveness and robustness of the proposed approach in comparison with the traditional particle swarm optimization algorithm
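
    The sketch below shows only the chaotic (logistic-map) sequence generation that such algorithms typically use to seed or perturb particles; the hydro scheduling model, the adaptive scaling term and the full PSO loop of the record are not reproduced, and the map seed value is an arbitrary assumption.

```python
import numpy as np

def logistic_chaos(n, dim, x0=0.4567, r=4.0):
    """Generate a chaotic sequence with the logistic map x <- r*x*(1-x), r = 4.
    The ergodic sequence in (0, 1) is often used to spread initial particles
    (or perturb stagnating ones) more evenly than plain uniform draws."""
    seq = np.empty((n, dim))
    x = x0
    for i in range(n):
        for d in range(dim):
            x = r * x * (1.0 - x)
            seq[i, d] = x
    return seq

def chaotic_init(n_particles, dim, lower, upper):
    c = logistic_chaos(n_particles, dim)
    return lower + c * (upper - lower)    # map chaos values onto the search box

swarm = chaotic_init(20, 3, lower=np.array([0.0, 0.0, 0.0]), upper=np.array([10.0, 5.0, 1.0]))
print(swarm.shape, swarm.min(axis=0).round(2), swarm.max(axis=0).round(2))
```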

  20. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    International Nuclear Information System (INIS)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-01-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired when applying matching pursuit to analyze the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and its barely requiring pre-measuring parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis. (paper)

  1. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    Science.gov (United States)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-10-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired when applying matching pursuit to analyze the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and its barely requiring pre-measuring parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis.

  2. Closed-Form Algorithm for 3-D Near-Field OFDM Signal Localization under Uniform Circular Array.

    Science.gov (United States)

    Su, Xiaolong; Liu, Zhen; Chen, Xin; Wei, Xizhang

    2018-01-14

    Due to its widespread application in communications, radar, and other fields, localization of the orthogonal frequency division multiplexing (OFDM) signal has become increasingly important. Under uniform circular array (UCA) and near-field conditions, this paper presents a closed-form algorithm based on phase differences for estimating the three-dimensional (3-D) location (azimuth angle, elevation angle, and range) of an OFDM signal. In the algorithm, considering that it is difficult to distinguish the frequencies of the OFDM signal's subcarriers and that phase-based methods are always affected by frequency estimation errors, sparse representation (SR) is employed to obtain the super-resolution frequencies and the corresponding phases of the subcarriers. Further, as the phase differences of adjacent sensors, which involve the azimuth angle, elevation angle and range parameters, can be expressed as indefinite equations, the near-field OFDM signal's 3-D location is obtained by the least squares method, where the phase differences are based on the average over the estimated subcarriers. Finally, the performance of the proposed algorithm is demonstrated by several simulations.

  3. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    Science.gov (United States)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
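
    A generic preconditioned conjugate gradient loop is sketched below on a small dense stand-in system with a simple Jacobi preconditioner; the multigrid, layer-oriented symmetric Gauss-Seidel preconditioner and the minimum-variance reconstruction matrices of the record are not reproduced, and the toy system is an assumption of this sketch.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A.
    M_inv is a callable applying the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system: a 1-D Laplacian with a mass term (stand-in for regularized
# wavefront reconstruction normal equations), with a Jacobi preconditioner.
n = 200
A = np.diag(2.1 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
jacobi = 1.0 / A.diagonal()
x = pcg(A, b, M_inv=lambda r: jacobi * r)
print(np.linalg.norm(A @ x - b))   # residual should be very small
```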

  4. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    Science.gov (United States)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.

  5. Adaptive Kalman filter based state of charge estimation algorithm for lithium-ion battery

    International Nuclear Information System (INIS)

    Zheng Hong; Liu Xu; Wei Min

    2015-01-01

    In order to improve the accuracy of the battery state of charge (SOC) estimation, in this paper we take a lithium-ion battery as an example to study the adaptive Kalman filter based SOC estimation algorithm. Firstly, the second-order battery system model is introduced. Meanwhile, the temperature and charge rate are introduced into the model. Then, the temperature and the charge rate are adopted to estimate the battery SOC, with the help of the parameters of an adaptive Kalman filter based estimation algorithm model. Afterwards, it is verified by the numerical simulation that in the ideal case, the accuracy of SOC estimation can be enhanced by adding two elements, namely, the temperature and charge rate. Finally, the actual road conditions are simulated with ADVISOR, and the simulation results show that the proposed method improves the accuracy of battery SOC estimation under actual road conditions. Thus, its application scope in engineering is greatly expanded. (paper)
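
    A one-state Kalman filter combining coulomb counting with a linearized OCV-SOC voltage model gives the flavor of such estimators; the second-order battery model, the temperature and charge-rate dependence and the adaptive noise tuning described in the record are not reproduced, and all model parameters below are illustrative assumptions.

```python
import numpy as np

def kf_soc(current, voltage, dt, capacity_ah,
           ocv0=3.2, ocv_slope=0.9, r_int=0.05,
           q_proc=1e-7, r_meas=1e-3, soc0=0.5):
    """1-state Kalman filter SOC estimator (coulomb counting + linear OCV model).

    Illustrative model (not the paper's second-order model):
        SOC_k+1 = SOC_k - dt * I_k / (3600 * capacity_ah)   (discharge current I > 0)
        V_k     = ocv0 + ocv_slope * SOC_k - r_int * I_k
    """
    soc, P = soc0, 1e-2
    est = np.empty(len(current))
    for k, (i_k, v_k) in enumerate(zip(current, voltage)):
        # predict step (coulomb counting)
        soc = soc - dt * i_k / (3600.0 * capacity_ah)
        P = P + q_proc
        # update step with the voltage measurement
        v_pred = ocv0 + ocv_slope * soc - r_int * i_k
        K = P * ocv_slope / (ocv_slope * P * ocv_slope + r_meas)
        soc = soc + K * (v_k - v_pred)
        P = (1.0 - K * ocv_slope) * P
        est[k] = soc
    return est

# Toy demo: constant 2 A discharge of a 10 Ah cell with noisy voltage readings.
rng = np.random.default_rng(0)
dt, n = 1.0, 3600
true_soc = 0.8 - np.arange(n) * dt * 2.0 / (3600 * 10.0)
current = np.full(n, 2.0)
voltage = 3.2 + 0.9 * true_soc - 0.05 * current + rng.normal(0, 0.01, n)
soc_hat = kf_soc(current, voltage, dt, capacity_ah=10.0, soc0=0.6)   # wrong initial guess
print(abs(soc_hat[-1] - true_soc[-1]))   # final SOC error should be small
```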

  6. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2014-01-06

    A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization with smooth problems. Moreover, the newly developed algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log(TOL))^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log(TOL))^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log(TOL))^2) is reached for the multidimensional case and verified numerically.

  7. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.; Hoel, Haakon; Tempone, Raul

    2014-01-01

    A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization with smooth problems. Moreover, the newly developed algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log(TOL))^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log(TOL))^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log(TOL))^2) is reached for the multidimensional case and verified numerically.

  8. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.

    Science.gov (United States)

    Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method.
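
    The amplitude-plus-refractory thresholding of local maxima can be sketched as follows on a synthetic signal; the wavelet multiresolution enhancement and signal mirroring of the record are omitted, and the threshold values and test signal are illustrative assumptions.

```python
import numpy as np

def detect_r_peaks(sig, fs, amp_frac=0.5, refractory_s=0.25):
    """Locate R-peak candidates with a first-order difference, an amplitude
    threshold (fraction of the signal maximum) and a refractory-time threshold."""
    diff = np.diff(sig)
    # local maxima: the derivative changes sign from positive to non-positive
    cand = np.where((diff[:-1] > 0) & (diff[1:] <= 0))[0] + 1
    amp_thresh = amp_frac * sig.max()
    cand = cand[sig[cand] >= amp_thresh]
    peaks, last = [], -np.inf
    for c in cand:
        if (c - last) / fs >= refractory_s:      # enforce a minimum RR interval
            peaks.append(c)
            last = c
    return np.array(peaks)

# Synthetic ECG-like test: sharp spikes every 0.8 s on a noisy baseline.
fs, dur = 360, 10.0
t = np.arange(int(fs * dur)) / fs
sig = 0.05 * np.random.default_rng(0).standard_normal(len(t))
for beat in np.arange(0.4, dur, 0.8):
    idx = int(beat * fs)
    sig[idx - 2:idx + 3] += np.array([0.2, 0.6, 1.0, 0.6, 0.2])
peaks = detect_r_peaks(sig, fs)
print(len(peaks), np.diff(peaks) / fs)   # expect 12 peaks with ~0.8 s RR intervals
```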

  9. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
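
    As a rough illustration only, maximum-likelihood restoration under Poisson statistics with known blur kernels reduces, in its unregularized form, to a Richardson-Lucy style multiplicative update; the sketch below applies a multi-frame variant of that update to synthetic data and omits the regularization, frame selection and blind PSF estimation of the record. The PSFs, scene and the averaging of per-frame ratios are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiframe_rl(frames, psfs, n_iter=50):
    """Multi-frame Richardson-Lucy style deconvolution (Poisson ML, PSFs known).

    frames: list of observed 2-D images; psfs: matching list of 2-D PSFs.
    Each iteration multiplies the current estimate by the average of the
    per-frame RL correction ratios."""
    est = np.full_like(frames[0], frames[0].mean(), dtype=np.float64)
    eps = 1e-12
    for _ in range(n_iter):
        corr = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            denom = fftconvolve(est, h, mode="same") + eps
            corr += fftconvolve(y / denom, h[::-1, ::-1], mode="same")
        est *= corr / len(frames)
    return est

# Toy demo: two blurred, Poisson-noisy observations of a point-like scene.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[32, 32] = 2000.0
truth[20, 40] = 1000.0

def gauss_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return g / g.sum()

psfs = [gauss_psf(1.5), gauss_psf(2.5)]
frames = [rng.poisson(fftconvolve(truth, h, mode="same") + 5.0).astype(float) for h in psfs]
est = multiframe_rl(frames, psfs)
print(np.unravel_index(est.argmax(), est.shape))   # should recover the brightest source
```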

  10. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    Science.gov (United States)

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in phased array radar (PAR) projects, the modeling of the problem involves many combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
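
    The pairing idea behind the maximum-minimum crossover, mating the fittest individuals with the least fit ones to preserve diversity, can be sketched on a real-coded toy problem; the phased-array excitation coding, the multi-objective handling and the exact crossover operator of GA-MMC are not reproduced, and the arithmetic blend below is an illustrative assumption.

```python
import numpy as np

def maxmin_crossover(pop, fitness, rng, blend=0.5):
    """Pair the fittest individuals with the least fit ones for crossover,
    instead of mating similar high-fitness parents, to keep genetic diversity.
    (A real-coded arithmetic blend is used here purely for illustration.)"""
    order = np.argsort(fitness)            # ascending: worst ... best
    n = len(pop)
    children = np.empty_like(pop)
    for i in range(n // 2):
        weak, strong = pop[order[i]], pop[order[n - 1 - i]]
        w = rng.uniform(0.0, blend, size=pop.shape[1])
        children[2 * i] = w * weak + (1 - w) * strong
        children[2 * i + 1] = (1 - w) * weak + w * strong
    return children

# One generation on a toy maximization problem: fitness = -sum(x^2).
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
fitness = -np.sum(pop**2, axis=1)
kids = maxmin_crossover(pop, fitness, rng)
print(kids.shape, np.mean(-np.sum(kids**2, axis=1)) > np.mean(fitness))
```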

  11. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters instead of using the empirical rule √n and obtains the optimal initial cluster centroids, alleviating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that when the number of clusters approaches the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease toward zero, so that the index does not lose its robustness and decision ability for selecting the optimal number of clusters. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determines the optimal number of clusters, but also reduces the number of FCM iterations while giving a stable clustering result. PMID:28042291
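
    For context, the core fuzzy c-means updates for a fixed cluster count are sketched below; the density-based initialization, the penalty-based validity index and the trial-and-error search over the cluster count described in the record are not reproduced, and the toy data and parameters are assumptions of this sketch.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Core fuzzy c-means updates for a fixed cluster count c (fuzzifier m > 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                       # fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # membership-weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard membership update: u_ik proportional to dist_ik^(-2/(m-1))
        U_new = dist ** (-2.0 / (m - 1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])
centers, U = fcm(X, c=2)
print(centers.round(2))   # centroids near (0, 0) and (5, 5)
```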

  12. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. Then the reference image is layered and the parallax is calculated based on the depth information. According to the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method achieves satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353, and the image histogram similarity reaches 93.77%.

  13. Nuclear fuel management optimization using adaptive evolutionary algorithms with heuristics

    International Nuclear Information System (INIS)

    Axmann, J.K.; Van de Velde, A.

    1996-01-01

    Adaptive Evolutionary Algorithms in combination with expert knowledge encoded in heuristics have proved to be a robust and powerful optimization method for the design of optimized PWR fuel loading patterns. Simple parallel algorithmic structures coupled with a low amount of communication between the computer processor units in use make it possible for workstation clusters to be employed efficiently. The extension of classic evolution strategies not only by new and alternative methods but also by the inclusion of heuristics with effects on the exchange probabilities of the fuel assemblies at specific core positions leads to the RELOPAT optimization code of the Technical University of Braunschweig. In combination with the new neutron-physical 3D nodal core simulator PRISM developed by SIEMENS, the PRIMO loading pattern optimization system has been designed. Highly promising results in the recalculation of known reload plans for German PWRs now lead to a commercially usable program. (author)

  14. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  15. A novel adaptive joint power control algorithm with channel estimation in a CDMA cellular system

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Joint power control combines the advantages of multi-user detection and power control, and it can combat multiple-access interference and the near-far problem. A novel adaptive joint power control algorithm with channel estimation in a CDMA cellular system was designed. Simulation results show that the algorithm can control the power both quickly and precisely as conditions change over time. The method is useful for increasing system capacity.
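
    The abstract gives no algorithmic detail, so the sketch below shows the classic distributed SIR-balancing power-control iteration (Foschini-Miljanic style) as a point of reference rather than the paper's channel-estimating joint algorithm; the gain matrix, noise level and target SIR are made-up illustrative values.

        import numpy as np

        def sir_balancing_power_control(G, noise, target_sir, p0, n_iter=50):
            """Each user i scales its power by target_sir / SIR_i using only its own measurement.

            G[i, j] is the link gain from transmitter j to receiver i.
            """
            p = np.array(p0, dtype=float)
            for _ in range(n_iter):
                signal = np.diag(G) * p
                interference = G @ p - signal + noise
                sir = signal / interference
                p = p * target_sir / sir                 # purely local update per user
            return p, sir

        # Illustrative example: 3 users with weak cross gains
        rng = np.random.default_rng(1)
        G = rng.uniform(0.001, 0.01, (3, 3)) + np.diag(rng.uniform(0.5, 1.0, 3))
        p, sir = sir_balancing_power_control(G, noise=1e-3, target_sir=5.0, p0=[0.1] * 3)
        print(np.round(sir, 2))                          # all users converge to the target SIR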

  16. Multi-objective optimization of p-xylene oxidation process using an improved self-adaptive differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Lili Tao; Bin Xu; Zhihua Hu; Weimin Zhong

    2017-01-01

    The rise in the global use of polyester fiber has contributed to strong demand for terephthalic acid (TPA). The liquid-phase catalytic oxidation of p-xylene (PX) to TPA is regarded as a critical and efficient chemical process in industry [1]. The PX oxidation reaction involves many complex side reactions, among which acetic acid combustion and PX combustion are the most important. As the target product of this oxidation process, the quality and yield of TPA are of great concern. However, improving the qualified product yield can bring about high energy consumption, which means that the economic objectives of this process cannot be achieved simultaneously because the two objectives are in conflict with each other. In this paper, an improved self-adaptive multi-objective differential evolution algorithm was proposed to handle such multi-objective optimization problems. The immune concept is introduced into the self-adaptive multi-objective differential evolution algorithm (SADE) to strengthen the local search ability and optimization accuracy. The proposed algorithm is successfully tested on several benchmark test problems, and performance measures such as convergence and divergence metrics are calculated. Subsequently, the multi-objective optimization of an industrial PX oxidation process is carried out using the proposed immune self-adaptive multi-objective differential evolution algorithm (ISADE). Optimization results indicate that application of ISADE can greatly improve the yield of TPA with low combustion loss without degrading TPA quality.
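
    As a minimal illustration of the self-adaptive differential evolution idea underlying SADE/ISADE, the sketch below runs a single-objective DE/rand/1/bin loop in which each individual carries its own F and CR that are occasionally resampled (jDE-style self-adaptation). The immune operator and the multi-objective machinery of the paper are deliberately omitted; all parameter values are illustrative.

        import numpy as np

        def self_adaptive_de(f, bounds, pop_size=30, gens=200, seed=0):
            """Single-objective DE/rand/1/bin with per-individual self-adaptive F and CR."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, (pop_size, dim))
            F = np.full(pop_size, 0.5)
            CR = np.full(pop_size, 0.9)
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    # self-adaptation: occasionally resample this individual's control parameters
                    Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                    CRi = rng.random() if rng.random() < 0.1 else CR[i]
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)
                    cross = rng.random(dim) < CRi
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft <= fit[i]:                     # greedy selection keeps the better vector
                        pop[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
            return pop[fit.argmin()], fit.min()

        # Example on the sphere function
        best_x, best_f = self_adaptive_de(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 5)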

  17. Adaptation of the Biolog Phenotype MicroArray™ Technology to Profile the Obligate Anaerobe Geobacter metallireducens

    Energy Technology Data Exchange (ETDEWEB)

    Joyner, Dominique; Fortney, Julian; Chakraborty, Romy; Hazen, Terry

    2010-05-17

    The Biolog OmniLog® Phenotype MicroArray (PM) plate technology was successfully adapted to generate a select phenotypic profile of the strict anaerobe Geobacter metallireducens (G.m.). The profile generated for G.m. provides insight into the chemical sensitivity of the organism as well as some of its metabolic capabilities when grown with a basal medium containing acetate and Fe(III). The PM technology was developed for aerobic organisms. The reduction of a tetrazolium dye by the test organism represents metabolic activity on the array, which is detected and measured by the OmniLog® system. We have previously adapted the technology for the anaerobic sulfate-reducing bacterium Desulfovibrio vulgaris. In this work, we have taken the technology a step further by adapting it for the iron-reducing obligate anaerobe Geobacter metallireducens. In an osmotic stress microarray it was determined that the organism has higher sensitivity to the impermeable solutes 3-6% KCl and 2-5% NaNO3, which exert osmotic stress on the cell, than to permeable non-ionic solutes represented by 5-20% ethylene glycol and 2-3% urea. The osmotic stress microarray also includes an array of osmoprotectants and precursor molecules that were screened to identify substrates that would provide osmotic protection against NaCl stress. None of the substrates tested conferred resistance to elevated concentrations of salt. Verification studies in which G.m. was grown in defined medium amended with 100 mM NaCl (MIC) and the common osmoprotectants betaine, glycine and proline supported the PM findings. Further verification was done by analysis of transcriptomic profiles of G.m. grown under 100 mM NaCl stress, which revealed up-regulation of genes related to degradation rather than accumulation of the above-mentioned osmoprotectants. The phenotypic profile, supported by additional analysis, indicates that the accumulation of these osmoprotectants as a response to salt stress does not

  18. An empirical study on SAJQ (Sorting Algorithm for Join Queries

    Directory of Open Access Journals (Sweden)

    Hassan I. Mathkour

    2010-06-01

    Full Text Available Most queries applied to database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. In addition to efficiency, stability of the sorting algorithm is a major feature needed when performing DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation with a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the m sorted sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to show that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results have shown that the proposed algorithm outperforms other stable sorting algorithms that are designed for join-based queries.
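
    A compact way to see the split-sort-merge structure described above is the Python sketch below, which stably sorts each of m runs and then performs the m-way merge with a heap; it mirrors the overall O(n log m) structure but is not the authors' SAJQ implementation.

        import heapq

        def m_way_merge_sort(records, key, m=4):
            """Stable sort: split into m runs, sort each run, then m-way merge them."""
            n = len(records)
            size = max(1, -(-n // m))                      # ceil(n / m) records per run
            runs = [sorted(records[i:i + size], key=key)   # Python's sort is stable
                    for i in range(0, n, size)]
            # heapq.merge performs the m-way merge and keeps equal keys in run order,
            # so the overall ordering of equal keys is preserved
            return list(heapq.merge(*runs, key=key))

        rows = [(3, 'a'), (1, 'b'), (3, 'c'), (2, 'd')]
        print(m_way_merge_sort(rows, key=lambda r: r[0]))  # equal keys keep their input order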

  19. An enhanced block matching algorithm for fast elastic registration in adaptive radiotherapy

    International Nuclear Information System (INIS)

    Malsch, U; Thieke, C; Huber, P E; Bendl, R

    2006-01-01

    Image registration has many medical applications in diagnosis, therapy planning and therapy. Especially for time-adaptive radiotherapy, an efficient and accurate elastic registration of images acquired for treatment planning, and at the time of the actual treatment, is highly desirable. Therefore, we developed a fully automatic and fast block matching algorithm which identifies a set of anatomical landmarks in a 3D CT dataset and relocates them in another CT dataset by maximization of local correlation coefficients in the frequency domain. To transform the complete dataset, a smooth interpolation between the landmarks is calculated by modified thin-plate splines with local impact. The concept of the algorithm allows separate processing of image discontinuities like temporally changing air cavities in the intestinal tract or rectum. The result is a fully transformed 3D planning dataset (planning CT as well as delineations of tumour and organs at risk) to a verification CT, allowing evaluation and, if necessary, changes of the treatment plan based on the current patient anatomy without time-consuming manual re-contouring. Typically the total calculation time is less than 5 min, which allows the use of the registration tool between acquiring the verification images and delivering the dose fraction for online corrections. We present verifications of the algorithm for five different patient datasets with different tumour locations (prostate, paraspinal and head-and-neck) by comparing the results with manually selected landmarks, visual assessment and consistency testing. It turns out that the mean error of the registration is better than the voxel resolution (2 × 2 × 3 mm³). In conclusion, we present an algorithm for fully automatic elastic image registration that is precise and fast enough for online corrections in an adaptive fractionated radiation treatment course.
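
    The core relocation step, maximizing a local correlation measure in the frequency domain, can be sketched as below: a landmark template taken from the planning CT is matched inside a search region of the verification CT using FFT-based cross-correlation. The snippet only removes the means (it is not the full local correlation coefficient of the paper) and omits the thin-plate-spline interpolation stage.

        import numpy as np

        def match_block_fft(template, search):
            """Locate `template` inside `search` by cross-correlation computed in the frequency domain."""
            th, tw = template.shape
            sh, sw = search.shape
            t = template - template.mean()
            s = search - search.mean()
            # zero-pad the template to the search-region size and correlate via FFT
            T = np.fft.rfft2(t, s.shape)
            S = np.fft.rfft2(s)
            corr = np.fft.irfft2(S * np.conj(T), s.shape)
            valid = corr[:sh - th + 1, :sw - tw + 1]         # offsets without wrap-around
            dy, dx = np.unravel_index(np.argmax(valid), valid.shape)
            return dy, dx                                    # top-left offset of the best match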

  20. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2013-12-01

    The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well behaved problems, that is the amount of computational work to solve the problem, is O(TOL⁻³). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and low-regularity observables is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift-singularity type of SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization with smooth problems. Moreover, the newly developed algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL⁻³) to O(TOL⁻²(log(TOL))²). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL⁻²(log(TOL))²). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL⁻²(log(TOL))²) is reached for the multidimensional case and verified numerically.

  1. A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.

    Science.gov (United States)

    Kallel, Fathi; Ben Hamida, Ahmed

    2017-12-01

    The performances of medical image processing techniques, in particular CT scans, are usually affected by poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods as a solution to adjust the intensity distribution of the dark image. In this paper, an advanced adaptive and simple algorithm for dark medical image enhancement is proposed. This approach is principally based on adaptive gamma correction using the discrete wavelet transform with singular-value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands by using the DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for an additional improvement of the LL component, the LL sub-band image obtained from the SVD enhancement stage is classified into two main classes (low contrast and moderate contrast classes) based on its statistical information and then processed using an adaptive dynamic gamma correction function. In fact, an adaptive gamma correction factor is calculated for each image according to its class. Finally, the obtained LL sub-band image undergoes an inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands for enhanced image generation. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
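
    A hedged sketch of the DWT-SVD enhancement pipeline is given below using the pywt package: decompose, rescale the singular values of the LL band, apply a gamma correction chosen from a simple two-class rule, and reconstruct. The reference used for the SVD scaling factor, the fixed gamma values and the class threshold are illustrative assumptions, not the paper's formulas.

        import numpy as np
        import pywt

        def dwt_svd_gamma_enhance(img, weight=0.5):
            # 1-level 2-D DWT: keep the detail bands, enhance only the LL (approximation) band
            img = img.astype(float)
            LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
            # SVD-based brightness adjustment of the LL band (illustrative correction factor)
            U, s, Vt = np.linalg.svd(LL, full_matrices=False)
            ref = np.full_like(LL, img.max())                # stand-in reference for the scaling
            xi = np.linalg.svd(ref, compute_uv=False)[0] / s[0]
            LL_svd = U @ np.diag(weight * xi * s + (1 - weight) * s) @ Vt
            # adaptive gamma correction: a darker LL band gets a smaller gamma (stronger boost)
            lo, hi = LL_svd.min(), LL_svd.max()
            norm = (LL_svd - lo) / (hi - lo + 1e-12)
            gamma = 0.5 if norm.mean() < 0.5 else 0.8
            LL_enh = norm ** gamma * (hi - lo) + lo
            # inverse DWT with the unprocessed LH, HL and HH sub-bands
            return pywt.idwt2((LL_enh, (LH, HL, HH)), 'haar')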

  2. Phase retrieval in near-field measurements by array synthesis

    DEFF Research Database (Denmark)

    Wu, Jian; Larsen, Flemming Holm

    1991-01-01

    The phase retrieval problem in near-field antenna measurements is formulated as an array synthesis problem. As a test case, a particular synthesis algorithm has been used to retrieve the phase of a linear array...

  3. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    Science.gov (United States)

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded due to the environmental noises surrounding mobile device users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. Due to these depicted reasons, a methodology is systematically proposed to eliminate the effects of background noises for the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.

  4. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm

    Directory of Open Access Journals (Sweden)

    Yung-Yue Chen

    2018-05-01

    Full Text Available Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded due to the environmental noises surrounding mobile device users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. Due to these depicted reasons, a methodology is systematically proposed to eliminate the effects of background noises for the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  6. Finite element analysis and genetic algorithm optimization design for the actuator placement on a large adaptive structure

    Science.gov (United States)

    Sheng, Lizeng

    The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures, the development and application of finite element analysis and genetic algorithms for optimal design of large-scale adaptive structures. We first review some basic concepts in finite element method and genetic algorithms, along with the research on smart structures. Then we propose a solution methodology for solving a critical problem in the design of a next generation of large-scale adaptive structures---optimal placements of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to derive a finite element formulation, the dissertation presents techniques associated with general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element. For completeness, a complete derivation of the DKT is presented. Geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under the combined thermal and electrical loads. Next, we solve the optimization problems of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining only one set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under the action of any one of the four thermal loads. A series of genetic algorithms

  7. ALE-PSO: An Adaptive Swarm Algorithm to Solve Design Problems of Laminates

    Directory of Open Access Journals (Sweden)

    Paolo Vannucci

    2009-04-01

    Full Text Available This paper presents an adaptive PSO algorithm whose numerical parameters can be updated following a scheduled protocol respecting some known criteria of convergence in order to enhance the chances of reaching the global optimum of a hard combinatorial optimization problem, such as those encountered in global optimization problems of composite laminates. Some examples concerning hard design problems are provided, showing the effectiveness of the approach.
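
    The sketch below shows what a scheduled adaptation of PSO parameters looks like in practice: the inertia weight decreases while the social coefficient grows over the iterations, following generic convergence-oriented settings rather than the exact ALE-PSO update protocol; all schedules and constants here are illustrative.

        import numpy as np

        def scheduled_pso(f, bounds, n_particles=30, iters=200, seed=0):
            """PSO whose inertia and acceleration coefficients follow a fixed schedule."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for t in range(iters):
                w = 0.9 - 0.5 * t / iters            # inertia schedule: explore early, exploit late
                c1 = 2.5 - 1.5 * t / iters           # cognitive coefficient decreases
                c2 = 1.0 + 1.5 * t / iters           # social coefficient increases
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        best, val = scheduled_pso(lambda z: float(np.sum(z ** 2)), [(-5, 5)] * 4)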

  8. Adaptively optimizing stochastic resonance in visual system

    Science.gov (United States)

    Yang, Tao

    1998-08-01

    A recent psychophysics experiment has shown that noise strength can affect perceived image quality. This work gives an adaptive process for achieving the optimal perceived image quality in a simple image perception array, which is a simple model of an image sensor. A reference image from memory is used for constructing a cost function and defining the optimal noise strength, where the cost function attains its minimum. The reference image is a binary image, which is used to define the background and the object. Finally, an adaptive algorithm is proposed for searching for the optimal noise strength. Computer experimental results show that if the reference image is a thresholded version of the sub-threshold input image, then the sensor array gives an optimal output in which the background and the object have the greatest contrast. If the reference image differs from a thresholded version of the sub-threshold input image, then the output usually gives a sub-optimal contrast between the object and the background.

  9. Beam pattern improvement by compensating array nonuniformities in a guided wave phased array

    International Nuclear Information System (INIS)

    Kwon, Hyu-Sang; Lee, Seung-Seok; Kim, Jin-Yeon

    2013-01-01

    This paper presents a simple data processing algorithm which can improve the performance of a uniform circular array based on guided wave transducers. The algorithm, being intended to be used with the delay-and-sum beamformer, effectively eliminates the effects of nonuniformities that can significantly degrade the beam pattern. Nonuniformities can arise intrinsically from the array geometry when the circular array is transformed to a linear array for beam steering and extrinsically from unequal conditions of transducers such as element-to-element variations of sensitivity and directivity. The effects of nonuniformities are compensated by appropriately imposing weight factors on the elements in the projected linear array. Different cases are simulated, where the improvements of the beam pattern, especially the level of the highest sidelobe, are clearly seen, and related issues are discussed. An experiment is performed which uses A0 mode Lamb waves in a steel plate, to demonstrate the usefulness of the proposed method. The discrepancy between theoretical and experimental beam patterns is explained by accounting for near-field effects. (paper)
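
    The compensation idea, weighting the elements of the projected linear array inside a delay-and-sum beamformer, can be illustrated with the time-domain sketch below; the weights are generic placeholders for the sensitivity and geometry compensation factors discussed in the paper, and the fractional delays are applied in the frequency domain purely for convenience.

        import numpy as np

        def weighted_delay_and_sum(signals, delays, weights, fs):
            """Delay-and-sum beamforming with per-element weights.

            `signals` is (n_elements, n_samples); `delays` are steering delays in seconds.
            """
            n_el, n_s = signals.shape
            out = np.zeros(n_s)
            freqs = np.fft.rfftfreq(n_s, d=1.0 / fs)
            for sig, tau, w in zip(signals, delays, weights):
                # apply a fractional delay in the frequency domain, then weight and sum
                spec = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau)
                out += w * np.fft.irfft(spec, n_s)
            return out / np.sum(weights)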

  10. SA-SSR: a suffix array-based algorithm for exhaustive and efficient SSR discovery in large genetic sequences.

    Science.gov (United States)

    Pickett, B D; Karlinsey, S M; Penrod, C E; Cormier, M J; Ebbert, M T W; Shiozawa, D K; Whipple, C J; Ridge, P G

    2016-09-01

    Simple Sequence Repeats (SSRs) are used to address a variety of research questions in a variety of fields (e.g. population genetics, phylogenetics, forensics, etc.), due to their high mutability within and between species. Here, we present an innovative algorithm, SA-SSR, based on suffix and longest common prefix arrays for efficiently detecting SSRs in large sets of sequences. Existing SSR detection applications are hampered by one or more limitations (e.g. speed, accuracy, ease of use). Our algorithm addresses these challenges while being the most comprehensive and correct SSR detection software available. SA-SSR is 100% accurate and detected >1000 more SSRs than the second best algorithm, while offering greater control to the user than any existing software. SA-SSR is freely available at http://github.com/ridgelab/SA-SSR. Contact: perry.ridge@byu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
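
    The snippet below builds the two data structures named in the title, a suffix array and Kasai's LCP array, and shows how a tandem repeat surfaces in them: adjacent suffixes whose starting positions differ by the motif length share a long common prefix. The naive construction and the final filtering heuristic are purely illustrative and are not the SA-SSR algorithm itself.

        def suffix_array(s):
            """Naive suffix array (sort all suffixes); fine for illustration, too slow for genomes."""
            return sorted(range(len(s)), key=lambda i: s[i:])

        def lcp_array(s, sa):
            """Kasai's algorithm: LCP of adjacent entries in the suffix array."""
            n = len(s)
            rank = [0] * n
            for r, i in enumerate(sa):
                rank[i] = r
            lcp, h = [0] * n, 0
            for i in range(n):
                if rank[i] > 0:
                    j = sa[rank[i] - 1]
                    while i + h < n and j + h < n and s[i + h] == s[j + h]:
                        h += 1
                    lcp[rank[i]] = h
                    h = max(h - 1, 0)
            return lcp

        seq = "ACACACACGGT"
        sa = suffix_array(seq)
        lcp = lcp_array(seq, sa)
        # A tandem repeat with motif length k shows up as adjacent suffixes whose starting
        # positions differ by k and whose LCP is long; here suffixes 0 and 2 share a
        # 6-character prefix, flagging the AC microsatellite.
        for r in range(1, len(sa)):
            i, j = sa[r - 1], sa[r]
            if lcp[r] >= 4 and abs(i - j) <= 6:
                print(f"candidate SSR: motif length {abs(i - j)}, position {min(i, j)}, LCP {lcp[r]}")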

  11. A parallel adaptive mesh refinement algorithm for predicting turbulent non-premixed combusting flows

    International Nuclear Information System (INIS)

    Gao, X.; Groth, C.P.T.

    2005-01-01

    A parallel adaptive mesh refinement (AMR) algorithm is proposed for predicting turbulent non-premixed combusting flows characteristic of gas turbine engine combustors. The Favre-averaged Navier-Stokes equations governing mixture and species transport for a reactive mixture of thermally perfect gases in two dimensions, the two transport equations of the κ-ψ turbulence model, and the time-averaged species transport equations, are all solved using a fully coupled finite-volume formulation. A flexible block-based hierarchical data structure is used to maintain the connectivity of the solution blocks in the multi-block mesh and facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. This AMR approach allows for anisotropic mesh refinement and the block-based data structure readily permits efficient and scalable implementations of the algorithm on multi-processor architectures. Numerical results for turbulent non-premixed diffusion flames, including cold- and hot-flow predictions for a bluff body burner, are described and compared to available experimental data. The numerical results demonstrate the validity and potential of the parallel AMR approach for predicting complex non-premixed turbulent combusting flows. (author)

  12. Adaptive Cross-Layer Distributed Energy-Efficient Resource Allocation Algorithms for Wireless Data Networks

    Directory of Open Access Journals (Sweden)

    Daniela Saturnino

    2008-10-01

    Full Text Available The issue of adaptive and distributed cross-layer resource allocation for energy efficiency in uplink code-division multiple-access (CDMA wireless data networks is addressed. The resource allocation problems are formulated as noncooperative games wherein each terminal seeks to maximize its own energy efficiency, namely, the number of reliably transmitted information symbols per unit of energy used for transmission. The focus of this paper is on the issue of adaptive and distributed implementation of policies arising from this approach, that is, it is assumed that only readily available measurements, such as the received data, are available at the receiver in order to play the considered games. Both single-cell and multicell networks are considered. Stochastic implementations of noncooperative games for power allocation, spreading code allocation, and choice of the uplink (linear receiver are thus proposed, and analytical results describing the convergence properties of selected stochastic algorithms are also given. Extensive simulation results show that, in many instances of practical interest, the proposed stochastic algorithms approach with satisfactory accuracy the performance of nonadaptive games, whose implementation requires much more prior information.

  13. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. Cancelier

    Full Text Available Abstract This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, the network was trained on a recurring basis and only the genetic algorithm technique was used. In this case, only the weights and bias of the output layer neuron were modified, starting from the parameters obtained from the offline training. From the experimental results obtained in a pilot plant, a good performance was observed for the proposed control system, with superior performance for the control algorithm with online adaptation of the model, particularly with respect to the offset present in the case of the fixed-parameter model.

  14. Adaptive Outlier-tolerant Exponential Smoothing Prediction Algorithms with Applications to Predict the Temperature in Spacecraft

    OpenAIRE

    Hu Shaolin; Zhang Wei; Li Ye; Fan Shunxi

    2011-01-01

    The exponential smoothing prediction algorithm is widely used in spaceflight control and in process monitoring, as well as in economic prediction. Two key problems remain open: one concerns the rule for selecting the parameter of the exponential smoothing prediction, and the other is how to reduce the adverse influence of outliers on the prediction. In this paper a new practical outlier-tolerant algorithm is built to adaptively select a proper parameter, and the exponential smoothing pr...
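
    One simple way to make exponential smoothing tolerant of outliers, in the spirit of the abstract, is to clip each innovation at a robust scale estimate before updating, as sketched below; the smoothing constant, the MAD-based scale and the clipping factor are illustrative choices, not the paper's adaptive selection rule.

        import numpy as np

        def outlier_tolerant_ses(y, alpha=0.3, k=3.0):
            """Simple exponential smoothing whose innovations are clipped at k robust sigmas."""
            y = np.asarray(y, dtype=float)
            scale = 1.4826 * np.median(np.abs(y - np.median(y))) + 1e-12   # MAD-based sigma
            s = y[0]
            preds = []
            for t in range(1, len(y)):
                preds.append(s)                              # one-step-ahead prediction
                e = np.clip(y[t] - s, -k * scale, k * scale) # limit the influence of outliers
                s = s + alpha * e
            return np.array(preds)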

  15. Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm

    Directory of Open Access Journals (Sweden)

    E. Parvinnia

    2014-01-01

    Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and autoregressive (AR) model coefficients, are extracted from EEG signals. The classification results are evaluated using leave-one-subject-out cross validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor classifier and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
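
    The weighting idea can be summarized in a few lines: each training sample carries a reliability weight that rescales its distance, so artifact-laden recordings are less likely to become the nearest neighbor. The sketch below assumes the weights are already known; the weight-learning step of WDNN and the EEG feature extraction are omitted, and dividing the distance by the weight is one illustrative convention.

        import numpy as np

        def wdnn_predict(X_train, y_train, w, X_test):
            """Weighted-distance nearest-neighbor prediction with per-sample reliability weights w."""
            w = np.asarray(w, dtype=float)
            preds = []
            for x in X_test:
                # a smaller weight inflates the effective distance of an unreliable sample
                d = np.linalg.norm(X_train - x, axis=1) / (w + 1e-12)
                preds.append(y_train[np.argmin(d)])
            return np.array(preds)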

  16. Spatial normalization of array-CGH data

    Directory of Open Access Journals (Sweden)

    Brennetot Caroline

    2006-05-01

    Full Text Available Abstract Background Array-based comparative genomic hybridization (array-CGH) is a recently developed technique for analyzing changes in DNA copy number. As in all microarray analyses, normalization is required to correct for experimental artifacts while preserving the true biological signal. We investigated various sources of systematic variation in array-CGH data and identified two distinct types of spatial effect of no biological relevance as the predominant experimental artifacts: continuous spatial gradients and local spatial bias. Local spatial bias affects a large proportion of arrays, and has not previously been considered in array-CGH experiments. Results We show that existing normalization techniques do not correct these spatial effects properly. We therefore developed an automatic method for the spatial normalization of array-CGH data. This method makes it possible to delineate and to eliminate and/or correct areas affected by spatial bias. It is based on the combination of a spatial segmentation algorithm called NEM (Neighborhood Expectation Maximization) and spatial trend estimation. We defined quality criteria for array-CGH data, demonstrating significant improvements in data quality with our method for three data sets coming from two different platforms (198, 175 and 26 BAC-arrays). Conclusion We have designed an automatic algorithm for the spatial normalization of BAC CGH-array data, preventing the misinterpretation of experimental artifacts as biologically relevant outliers in the genomic profile. This algorithm is implemented in the R package MANOR (Micro-Array NORmalization), which is described at http://bioinfo.curie.fr/projects/manor and available from the Bioconductor site http://www.bioconductor.org. It can also be tested on the CAPweb bioinformatics platform at http://bioinfo.curie.fr/CAPweb.

  17. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; therefore, we present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method. (paper)

  18. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  19. Sparse Adaptive Channel Estimation Based on lp-Norm-Penalized Affine Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Yingsong Li

    2014-01-01

    Full Text Available We propose an lp-norm-penalized affine projection algorithm (LP-APA) for broadband multipath adaptive channel estimation. The proposed LP-APA is realized by incorporating an lp-norm into the cost function of the conventional affine projection algorithm (APA) to exploit the sparsity property of the broadband wireless multipath channel, by which the convergence speed and steady-state performance of the APA are significantly improved. The implementation of the LP-APA is equivalent to adding a zero attractor to its iterations. The simulation results, which are obtained from a sparse channel estimation, demonstrate that the proposed LP-APA can efficiently improve channel estimation performance in terms of both convergence speed and steady-state performance when the channel is exactly sparse.
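
    The zero-attractor interpretation mentioned above is easiest to see in the simpler LMS setting: an l1 penalty on the filter weights adds a term that pulls inactive taps toward zero, as in the sketch below. This is a well-known relative of the proposed LP-APA, not the LP-APA itself; the step size, attractor strength and channel are illustrative.

        import numpy as np

        def zero_attracting_lms(x, d, taps=32, mu=0.01, rho=5e-4):
            """Zero-attracting LMS for sparse channel estimation (l1 penalty added to the cost)."""
            w = np.zeros(taps)
            for n in range(taps - 1, len(x)):
                u = x[n - taps + 1:n + 1][::-1]         # regressor [x[n], x[n-1], ..., x[n-taps+1]]
                e = d[n] - w @ u                        # a-priori error
                w = w + mu * e * u - rho * np.sign(w)   # LMS update plus the zero attractor
            return w

        # Example: identify a sparse 32-tap channel from noisy observations
        rng = np.random.default_rng(0)
        h = np.zeros(32); h[[2, 11, 25]] = [1.0, -0.6, 0.3]
        x = rng.standard_normal(5000)
        d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w = zero_attracting_lms(x, d)                   # w approaches the sparse channel h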

  20. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
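
    The two estimators being compared can each be condensed to a few lines: a center-of-mass (centroid) estimate, and a cross-correlation with an ideal reference spot whose peak gives the displacement. The sketch below stops at integer-pixel resolution; a practical sensor would add sub-pixel interpolation of the correlation peak, which is not shown.

        import numpy as np

        def centroid_position(spot):
            """Center-of-mass estimate of the spot position (the baseline the paper critiques)."""
            y, x = np.indices(spot.shape)
            total = spot.sum()
            return spot.ravel() @ y.ravel() / total, spot.ravel() @ x.ravel() / total

        def correlation_position(spot, ref):
            """Spot displacement relative to an ideal reference spot of the same shape,
            found as the peak of their circular cross-correlation (integer pixels only)."""
            corr = np.fft.irfft2(np.fft.rfft2(spot) * np.conj(np.fft.rfft2(ref)), spot.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # convert the circular-shift index into a signed displacement
            shift = [p if p <= s // 2 else p - s for p, s in zip(peak, spot.shape)]
            return shift[0], shift[1]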

  1. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    Science.gov (United States)

    Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting

    2010-12-01

    We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos whose parameters are selected by the genetic algorithm. Shuffling the message's bit order provides us with a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.

  2. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lifang

    2010-01-01

    Full Text Available We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos whose parameters are selected by the genetic algorithm. Shuffling the message's bit order provides us with a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
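
    A minimal sketch of chaos-driven LSB embedding is given below: a logistic-map sequence is ranked to obtain the permutation that scrambles the correspondence between message bits and cover positions, with (x0, r) acting as the key. It operates on spatial-domain pixel LSBs for brevity, whereas the paper embeds in JPEG images and tunes the chaos parameters with a genetic algorithm; those parts are not reproduced here.

        import numpy as np

        def logistic_permutation(n, x0=0.7, r=3.99):
            """Chaotic (logistic-map) permutation of indices 0..n-1; x0 and r act as the key."""
            x = np.empty(n)
            x[0] = x0
            for i in range(1, n):
                x[i] = r * x[i - 1] * (1 - x[i - 1])
            return np.argsort(x)                      # ranking the chaotic sequence gives the shuffle

        def embed_lsb(cover, message_bits, x0=0.7, r=3.99):
            """Embed bits in the LSBs of a flattened cover array in chaos-shuffled order.

            Assumes len(message_bits) <= cover.size; spatial-domain LSB only.
            """
            flat = cover.astype(np.uint8).ravel()
            order = logistic_permutation(len(message_bits), x0, r)
            for k, pos in enumerate(order):
                flat[pos] = (flat[pos] & 0xFE) | message_bits[k]
            return flat.reshape(cover.shape)

        def extract_lsb(stego, n_bits, x0=0.7, r=3.99):
            flat = stego.ravel()
            order = logistic_permutation(n_bits, x0, r)
            return np.array([flat[pos] & 1 for pos in order], dtype=np.uint8)

        # Round trip on random data
        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        bits = rng.integers(0, 2, 200, dtype=np.uint8)
        stego = embed_lsb(cover, bits)
        assert np.array_equal(extract_lsb(stego, len(bits)), bits)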

  3. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    OpenAIRE

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  4. An Adaptive Tuning Mechanism for Phase-Locked Loop Algorithms for Faster Time Performance of Interconnected Renewable Energy Sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2015-01-01

    Interconnected renewable energy sources (RES) require fast and accurate fault ride through (FRT) operation, in order to support the power grid, when faults occur. This paper proposes an adaptive phase-locked loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response...

  5. Adaptive Backoff Algorithm for Contention Window for Dense IEEE 802.11 WLANs

    Directory of Open Access Journals (Sweden)

    Ikram Syed

    2016-01-01

    Full Text Available The performance improvement of IEEE 802.11 WLANs under widely fluctuating network loads is a challenging task. To improve the performance in the saturated state, we develop an adaptive backoff algorithm that maximizes the system throughput, reduces the collision probability, and maintains high fairness for the IEEE 802.11 DCF under dense network conditions. In this paper, we present two main advantages of the proposed ABA-CW algorithm. First, it estimates the number of active stations and then calculates an optimal contention window based on that number. Each station calculates the channel state probabilities by observing the channel for the total backoff period. Based on these channel state probabilities, each station can estimate the number of active stations in the network, after which it calculates the optimal CW utilizing the estimated number of active stations. To evaluate the proposed mechanism, we derive an analytical model to determine the network performance. From our results, the proposed ABA-CW mechanism achieved better system performance compared to fixed-CW (BEB, EIED, LILD, and SETL) and adaptive-CW (AMOCW, Idle Sense) mechanisms. The simulation results confirmed the outstanding performance of the proposed mechanism in that it led to a lower collision probability, higher throughput, and high fairness.

  6. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  7. Analysis and Design of a Context Adaptable SAD/MSE Architecture

    Directory of Open Access Journals (Sweden)

    Arvind Sudarsanam

    2009-01-01

    Full Text Available Design of flexible multimedia accelerators that can cater to multiple algorithms is being aggressively pursued in the media processors community. Such an approach is justified in the era of sub-45 nm technology, where an increasingly dominant leakage power component is forcing designers to make the best possible use of on-chip resources. In this paper we present an analysis of two commonly used window-based operations (sum of absolute differences and mean squared error) across a variety of search patterns and block sizes (2×3, 5×5, etc.). We propose a context adaptable architecture that has (i) a configurable 2D systolic array and (ii) a 2D Configurable Register Array (CRA). The CRA can cater to variable pixel access patterns while reusing fetched pixels across search windows. Benefits of the proposed architecture when compared to 15 other published architectures are adaptability, high throughput, and low latency, at a cost of increased footprint when ported on a Xilinx FPGA.

  8. Selecting Sums in Arrays

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2008-01-01

    In an array of n numbers each of the (n choose 2) + n contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k'th largest sum. For this problem we prove a time bound of Θ(n · max {1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max {1, log(k/n)}) time algorithm that selects a subarray storing the k'th largest sum among all subarrays of length at least l and at most u.
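
    For reference, the reporting task can be stated in a few lines of straightforward code using prefix sums and a size-k min-heap, as below; this runs in O(n² log k) and is only meant to make the problem concrete, not to reproduce the optimal O(n + k) reporting and Θ(n · max{1, log(k/n)}) selection algorithms of the paper.

        import heapq
        from itertools import accumulate

        def k_largest_subarray_sums(a, k, lo=1, hi=None):
            """Report the k largest sums over subarrays whose length lies in [lo, hi]."""
            hi = hi if hi is not None else len(a)
            prefix = [0] + list(accumulate(a))           # prefix[j] - prefix[i] = sum(a[i:j])
            heap = []                                    # min-heap of the k best sums seen so far
            for j in range(1, len(prefix)):
                for i in range(max(0, j - hi), j - lo + 1):
                    s = prefix[j] - prefix[i]
                    if len(heap) < k:
                        heapq.heappush(heap, (s, i, j))
                    elif s > heap[0][0]:
                        heapq.heapreplace(heap, (s, i, j))
            return sorted(heap, reverse=True)            # (sum, start, end) with sum == sum(a[start:end])

        print(k_largest_subarray_sums([2, -1, 3, -4, 5], k=3, lo=1, hi=3))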

  9. Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.

    Science.gov (United States)

    Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing

    2018-05-01

    It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to keep an AO system stable, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control multiple DMs. In order to examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated in theory that the algorithm we present can selectively make the woofer and tweeter correct aberrations of different spatial frequencies and suppress the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the results of the simulation, which demonstrated in practice that this algorithm is practical for an AO system with dual DMs.

  10. A local adaptive algorithm for emerging scale-free hierarchical networks

    International Nuclear Information System (INIS)

    Gomez Portillo, I J; Gleiser, P M

    2010-01-01

    In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.

  11. Optimization of IBF parameters based on adaptive tool-path algorithm

    Science.gov (United States)

    Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi

    2018-03-01

    As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatically generated tool path carrying dwell-time information, a novel algorithm is proposed in this thesis. Based on the removal functions produced by our IBF equipment and the adaptive tool path, optimized parameters are obtained through analysis of the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the PV (peak-to-valley) and RMS (root mean square) surface errors were reduced from 110.22 nm and 13.998 nm to 4.81 nm and 0.495 nm, respectively, within the 98% aperture. The result shows that the algorithm and the optimized parameters provide a good theoretical basis for high-precision IBF processing.

  12. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.

  13. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions called the multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.

  14. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  15. Adaptive algorithms for a self-shielding wavelet-based Galerkin method

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.

    2009-01-01

    The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct the cross-sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)

  16. On e-business strategy planning and performance evaluation: An adaptive algorithmic managerial approach

    Directory of Open Access Journals (Sweden)

    Alexandra Lipitakis

    2017-07-01

    Full Text Available A new e-business strategy planning and performance evaluation scheme based on adaptive algorithmic modelling techniques is presented. The effect of the financial and non-financial performance of organizations on e-business strategy planning is investigated. The relationships between the four strategic planning parameters are examined, the directions of these relationships are given and six additional basic components are also considered. The new conceptual model has been constructed for e-business strategic planning and performance evaluation, and an adaptive algorithmic modelling approach is presented. The new adaptive algorithmic modelling scheme, including eleven dynamic modules, can be optimized and used effectively in e-business strategic planning and strategic planning evaluation of various e-services in very large organizations and businesses. A synoptic statistical analysis and comparative numerical results for the cases of the UK and Greece are given. The proposed e-business models indicate how e-business strategic planning may affect financial and non-financial performance in businesses and organizations by exploring whether models which are used for strategy planning can be applied to e-business planning and whether these models would be valid in different environments. A conceptual model has been constructed and qualitative research methods have been used for testing a predetermined number of considered hypotheses. The proposed models have been tested in the UK and Greece, and the conclusions, including numerical results and statistical analyses, indicated existing relationships between the considered dependent and independent variables. The proposed e-business models are expected to contribute to the e-business strategy planning of businesses and organizations, and managers should consider applying these models to their e-business strategy planning to improve their companies' performances. This research study brings together elements of e

  17. A Novel Clinical Decision Support System Using Improved Adaptive Genetic Algorithm for the Assessment of Fetal Well-Being

    Directory of Open Access Journals (Sweden)

    Sindhu Ravindran

    2015-01-01

    Full Text Available A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. Also, this search algorithm utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
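
    Sigma scaling of the kind mentioned in the record above is a standard guard against premature convergence in genetic algorithms. The exact formula used by IAGA is not given in the abstract, so the sketch below uses the common textbook form f' = 1 + (f - mean)/(2*sigma) with a small floor, applied inside fitness-proportionate selection; all function names and parameter values are illustrative, not taken from the paper.

      import numpy as np

      def sigma_scaled_fitness(raw_fitness, floor=0.1):
          """Classic sigma scaling: f' = 1 + (f - mean) / (2 * std).

          Damps early outliers so they do not dominate selection, and keeps
          selection pressure alive late in the run when fitness values bunch up.
          """
          f = np.asarray(raw_fitness, dtype=float)
          mean, std = f.mean(), f.std()
          if std == 0.0:                        # all individuals equally fit
              return np.ones_like(f)
          scaled = 1.0 + (f - mean) / (2.0 * std)
          return np.clip(scaled, floor, None)   # avoid zero or negative weights

      def roulette_select(population, raw_fitness, rng, n_parents):
          """Fitness-proportionate selection on the sigma-scaled values."""
          weights = sigma_scaled_fitness(raw_fitness)
          probs = weights / weights.sum()
          idx = rng.choice(len(population), size=n_parents, p=probs)
          return [population[i] for i in idx]

      # toy usage on six random bit-string individuals
      rng = np.random.default_rng(0)
      pop = [rng.integers(0, 2, size=8) for _ in range(6)]
      fit = [int(ind.sum()) for ind in pop]          # hypothetical raw fitness
      parents = roulette_select(pop, fit, rng, n_parents=4)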

  18. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.

  19. Design and FPGA-implementation of an improved adaptive fuzzy logic controller for DC motor speed control

    Directory of Open Access Journals (Sweden)

    E.A. Ramadan

    2014-09-01

    Full Text Available This paper presents an improved adaptive fuzzy logic speed controller for a DC motor, based on a field programmable gate array (FPGA) hardware implementation. The developed controller includes an adaptive fuzzy logic control (AFLC) algorithm, which is designed and verified with a nonlinear model of the DC motor. Then, it has been synthesised, functionally verified and implemented using the Xilinx Integrated Software Environment (ISE) and a Spartan-3E FPGA. The performance of this controller has been successfully validated with good tracking results under different operating conditions.

  20. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto; Cheung, Sai Hung

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  1. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks

    Science.gov (United States)

    Li, Yuhong

    2018-01-01

    In this paper, we propose a novel parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum set of control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. Experiments were conducted on a set of canonical networks and a list of real-world networks. Comparison results demonstrated that the algorithm is better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there are links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks or even extra-large networks with hundreds of thousands of nodes. PMID:29554140

  2. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  3. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Full Text Available Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus
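
    The record defines the MOLO objective only at a high level; the locus-averaged Shannon entropy (LASE) term it mentions can be illustrated in a few lines of code. The sketch below computes per-locus entropy from genotype (or allele) frequencies and greedily ranks candidate loci by information content; it ignores the map-length, uniformity and obligate-SNP terms of the full objective, and all names and the toy frequencies are illustrative.

      import numpy as np

      def locus_entropy(freqs):
          """Shannon entropy (bits) of one locus given its genotype/allele frequencies."""
          p = np.asarray(freqs, dtype=float)
          p = p[p > 0]                      # treat 0 * log(0) as 0
          return float(-(p * np.log2(p)).sum())

      def lase(panel_freqs):
          """Locus-averaged Shannon entropy of a candidate SNP panel."""
          return sum(locus_entropy(f) for f in panel_freqs) / len(panel_freqs)

      def select_top_loci(candidate_freqs, n):
          """Greedy illustration: keep the n most informative loci from a pool."""
          order = sorted(range(len(candidate_freqs)),
                         key=lambda i: locus_entropy(candidate_freqs[i]),
                         reverse=True)
          return order[:n]

      pool = [[0.81, 0.18, 0.01], [0.25, 0.50, 0.25], [0.60, 0.30, 0.10]]
      chosen = select_top_loci(pool, n=2)
      print(chosen, round(lase([pool[i] for i in chosen]), 3))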

  4. PHASED ARRAY FEED CALIBRATION, BEAMFORMING, AND IMAGING

    International Nuclear Information System (INIS)

    Landon, Jonathan; Elmer, Michael; Waldron, Jacob; Jones, David; Stemmons, Alan; Jeffs, Brian D.; Warnick, Karl F.; Richard Fisher, J.; Norrod, Roger D.

    2010-01-01

    Phased array feeds (PAFs) for reflector antennas offer the potential for increased reflector field of view and faster survey speeds. To address some of the development challenges that remain for scientifically useful PAFs, including calibration and beamforming algorithms, sensitivity optimization, and demonstration of wide field of view imaging, we report experimental results from a 19 element room temperature L-band PAF mounted on the Green Bank 20 Meter Telescope. Formed beams achieved an aperture efficiency of 69% and a system noise temperature of 66 K. Radio camera images of several sky regions are presented. We investigate the noise performance and sensitivity of the system as a function of elevation angle with statistically optimal beamforming and demonstrate cancelation of radio frequency interference sources with adaptive spatial filtering.

  5. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2015-10-01

    Full Text Available Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. Aimed at overcoming the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions using an enhanced algorithm of fast maximin Latin hypercube design is developed. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches named Kriging and optimized radial basis function (ORBF) are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves to be helpful in improving the precision of ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In a word, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.
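
    The "fast maximin Latin hypercube design" mentioned above is not specified in the record; as a point of reference, a plain maximin Latin hypercube can be built by random restarts that keep the design whose smallest inter-point distance is largest. The sketch below is that baseline, not the enhanced algorithm of the paper; the sizes and restart budget are illustrative.

      import numpy as np

      def random_lhd(n, d, rng):
          """One random Latin hypercube: n points in [0, 1]^d with exactly one
          point per axis-aligned bin in every dimension."""
          cols = [(rng.permutation(n) + rng.random(n)) / n for _ in range(d)]
          return np.column_stack(cols)

      def min_pairwise_distance(x):
          diff = x[:, None, :] - x[None, :, :]
          dist = np.sqrt((diff ** 2).sum(-1))
          np.fill_diagonal(dist, np.inf)
          return dist.min()

      def maximin_lhd(n, d, restarts=200, seed=0):
          """Keep the candidate whose minimum inter-point distance is maximal."""
          rng = np.random.default_rng(seed)
          best, best_score = None, -np.inf
          for _ in range(restarts):
              cand = random_lhd(n, d, rng)
              score = min_pairwise_distance(cand)
              if score > best_score:
                  best, best_score = cand, score
          return best, best_score

      design, score = maximin_lhd(n=20, d=3)   # e.g. 20 CFD sample points over 3 inputs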

  6. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  7. Meta-heuristic cuckoo search algorithm for the correction of faulty array antenna

    International Nuclear Information System (INIS)

    Khan, S.U.; Qureshi, I.M.

    2015-01-01

    In this article, we introduce a CSA (Cuckoo Search Algorithm) for the compensation of a faulty array antenna. It is assumed that the faulty element location is known. When a sensor fails, it disturbs the power pattern, owing to which the SLL (Sidelobe Level) rises and nulls are shifted from their required positions. In this approach, the CSA optimizes the weights of the active elements for the reduction of the SLL and the placement of nulls in the desired directions. The meta-heuristic CSA is used for the control of the SLL and the steering of nulls to their required positions. The CSA is based on the obligate brood-parasitic behavior of some cuckoo species in combination with Lévy flights. The fitness function is used to reduce the error between the desired and obtained patterns along with null constraints. Simulation results for various scenarios are given to demonstrate the validity and performance of the proposed method. (author)

  8. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options owing to its abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, including the series and shunt resistances that are neglected in some studies. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, with input impedance conversion used to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
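
    The thesis abstract does not state the exact step-size adaptation law, so the sketch below shows one generic adaptive P&O variant: the duty-ratio perturbation is scaled by the observed power slope, so the tracker takes large steps far from the maximum power point and small steps near it. The gain k and the step limits are illustrative and would have to be tuned to the converter and panel.

      def adaptive_po_step(p_now, p_prev, d_now, d_prev,
                           k=0.02, step_min=1e-4, step_max=0.02,
                           d_min=0.05, d_max=0.95):
          """One perturb-and-observe iteration on the boost-converter duty ratio.

          Returns the next duty ratio. Classic P&O direction logic: keep moving
          in the same direction if power rose, reverse otherwise; the step size
          is proportional to |dP/dD|, clamped to [step_min, step_max].
          """
          dp = p_now - p_prev
          dd = d_now - d_prev
          slope = abs(dp / dd) if dd != 0 else 0.0
          step = min(max(k * slope, step_min), step_max)
          direction = 1.0 if (dp >= 0) == (dd >= 0) else -1.0
          return min(max(d_now + direction * step, d_min), d_max)

      # usage inside a simulation loop: p_now = measured PV voltage * current
      d_prev, d_now, p_prev = 0.50, 0.51, 40.0
      p_now = 41.2                                  # hypothetical new power reading
      d_next = adaptive_po_step(p_now, p_prev, d_now, d_prev)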

  9. An adaptive compensation algorithm for temperature drift of micro-electro-mechanical systems gyroscopes using a strong tracking Kalman filter.

    Science.gov (United States)

    Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan

    2015-05-13

    We present an adaptive algorithm for a system integrated with micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence from the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model and changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment under the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope in the changing temperature. The heading error is less than 0.6° in the static temperature experiment, and also is kept in the range from 5° to -2° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to a changing temperature, and performs significantly better than KF and MLR to compensate the temperature drift of a gyroscope and eliminate the influence of temperature variation.

  10. An Adaptive Compensation Algorithm for Temperature Drift of Micro-Electro-Mechanical Systems Gyroscopes Using a Strong Tracking Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yibo Feng

    2015-05-01

    Full Text Available We present an adaptive algorithm for a system integrated with micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence from the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model and changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment under the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope in the changing temperature. The heading error is less than 0.6° in the static temperature experiment, and also is kept in the range from 5° to −2° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to a changing temperature, and performs significantly better than KF and MLR to compensate the temperature drift of a gyroscope and eliminate the influence of temperature variation.

  11. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods as well. Whereas ESPRIT only requires that the sensor array possess a single invariance best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre including total least squares (TLS) ESPRIT applied to uniform linear arrays are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
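
    For the uniform linear array case discussed above, the single invariance is the one-element shift between the first M-1 and last M-1 sensors. The sketch below is the simpler least-squares ESPRIT variant (not the TLS version summarized in the record), assuming a narrowband source model, half-wavelength spacing and a known source count; all names and the toy scenario are illustrative.

      import numpy as np

      def ls_esprit_ula(snapshots, n_sources, spacing_wavelengths=0.5):
          """Least-squares ESPRIT direction finding for a uniform linear array.

          snapshots: complex array of shape (n_elements, n_snapshots).
          Returns direction-of-arrival estimates in degrees.
          """
          x = np.asarray(snapshots)
          r = x @ x.conj().T / x.shape[1]              # sample covariance
          _, eigvec = np.linalg.eigh(r)                # eigenvalues in ascending order
          es = eigvec[:, -n_sources:]                  # signal subspace
          es1, es2 = es[:-1, :], es[1:, :]             # the two displaced subarrays
          psi = np.linalg.lstsq(es1, es2, rcond=None)[0]
          phases = np.angle(np.linalg.eigvals(psi))    # exp(j*2*pi*d*sin(theta))
          sin_theta = phases / (2 * np.pi * spacing_wavelengths)
          return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

      # toy example: two sources at -20 and 30 degrees, 8-element ULA, 400 snapshots
      rng = np.random.default_rng(1)
      m, n, doas = 8, 400, np.radians([-20.0, 30.0])
      a = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
      s = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
      noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
      print(np.sort(ls_esprit_ula(a @ s + noise, n_sources=2)))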

  12. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Full Text Available Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
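
    The defining property stated above is easy to check mechanically, which is useful when experimenting with the constructions the survey covers. The sketch below verifies that every N x t sub-array of a candidate covers all v**t t-tuples; the example is the classical four-run covering array of strength 2 for three binary factors.

      from itertools import combinations, product
      import numpy as np

      def is_covering_array(ca, t, v):
          """True if every selection of t columns of `ca` covers all v**t tuples."""
          ca = np.asarray(ca)
          required = set(product(range(v), repeat=t))
          for cols in combinations(range(ca.shape[1]), t):
              seen = {tuple(row) for row in ca[:, cols]}
              if not required <= seen:
                  return False
          return True

      ca = [[0, 0, 0],
            [0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
      print(is_covering_array(ca, t=2, v=2))   # True: all 2-way interactions covered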

  13. pSum-SaDE: A Modified p-Median Problem and Self-Adaptive Differential Evolution Algorithm for Text Summarization

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliev

    2011-01-01

    Full Text Available Extractive multi-document summarization is modeled as a modified p-median problem. The problem is formulated taking into account four basic requirements that summaries should satisfy, namely relevance, information coverage, diversity, and length limit. To solve the optimization problem, a self-adaptive differential evolution algorithm is created. Differential evolution has been proven to be an efficient and robust algorithm for many real optimization problems. However, it still may converge toward local optimum solutions, its parameters need to be adjusted manually, and finding the best values for the control parameters is a time-consuming task. In this paper, a self-adaptive scaling factor is proposed for the original DE to increase its exploration and exploitation ability. We find that self-adaptive differential evolution can efficiently find the best solution in comparison with the canonical differential evolution. We implemented our model on the multi-document summarization task. Experiments have shown that the proposed model is competitive on the DUC2006 dataset.
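
    The abstract describes a self-adaptive scaling factor without giving the update rule, so the sketch below uses the well-known jDE-style scheme on a generic DE/rand/1/bin minimizer: each individual carries its own F and CR, they are occasionally resampled, and settings that produce a winning trial survive. It is a stand-in for the paper's operator, applied here to a toy continuous problem rather than to summarization; all parameter values are illustrative.

      import numpy as np

      def self_adaptive_de(objective, bounds, pop_size=30, generations=200, seed=0):
          """DE/rand/1/bin minimizer with jDE-style self-adaptation of F and CR."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = lo.size
          pop = lo + rng.random((pop_size, dim)) * (hi - lo)
          fit = np.array([objective(x) for x in pop])
          F = np.full(pop_size, 0.5)
          CR = np.full(pop_size, 0.9)
          for _ in range(generations):
              for i in range(pop_size):
                  # self-adaptation: occasionally resample the control parameters
                  Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                  CRi = rng.random() if rng.random() < 0.1 else CR[i]
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                       size=3, replace=False)
                  mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)
                  cross = rng.random(dim) < CRi
                  cross[rng.integers(dim)] = True      # take at least one mutant gene
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = objective(trial)
                  if f_trial <= fit[i]:                # greedy selection
                      pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
          best = int(np.argmin(fit))
          return pop[best], fit[best]

      # usage: minimize the 5-dimensional sphere function
      x_best, f_best = self_adaptive_de(lambda x: float((x ** 2).sum()),
                                        bounds=[(-5, 5)] * 5)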

  14. Adaptive twisting sliding mode algorithm for hypersonic reentry vehicle attitude control based on finite-time observer.

    Science.gov (United States)

    Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun

    2018-06-01

    This paper focuses on the adaptive twisting sliding mode control for the Hypersonic Reentry Vehicles (HRVs) attitude tracking issue. The HRV attitude tracking model is transformed into the error dynamics in matched structure, whereas an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. Hence, an adaptive finite-time observer is used to estimate the unknown state. Then, an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. Also, the developed control law avoids the aggressive chattering phenomenon of the existing adaptive twisting approaches because the adaptive gains decrease close to the disturbance once the trajectories reach the sliding surface. Finally, numerical simulations on the attitude control of the HRV are conducted to verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Adaptive Algorithm For Identification Of The Environment Parameters In Contact Tasks

    Energy Technology Data Exchange (ETDEWEB)

    Tuneski, Atanasko; Babunski, Darko [Faculty of Mechanical Engineering, ' St. Cyril and Methodius' University, Skopje (Macedonia, The Former Yugoslav Republic of)

    2003-07-01

    An adaptive algorithm for identification of the unknown parameters of the dynamic environment in contact tasks is proposed in this paper using the augmented least squares estimation method. An approximate environment digital simulator for the continuous environment dynamics is derived, i.e. a discrete transfer function which has approximately the same characteristics as the continuous environment dynamics is found. For solving this task a method named hold equivalence is used. The general model of the environment dynamics is given, and the case when the environment dynamics is represented by second-order models with parameter uncertainties is considered. (Author)

  16. Adaptive Algorithm For Identification Of The Environment Parameters In Contact Tasks

    International Nuclear Information System (INIS)

    Tuneski, Atanasko; Babunski, Darko

    2003-01-01

    An adaptive algorithm for identification of the unknown parameters of the dynamic environment in contact tasks is proposed in this paper using the augmented least squares estimation method. An approximate environment digital simulator for the continuous environment dynamics is derived, i.e. a discrete transfer function which has approximately the same characteristics as the continuous environment dynamics is found. For solving this task a method named hold equivalence is used. The general model of the environment dynamics is given, and the case when the environment dynamics is represented by second-order models with parameter uncertainties is considered. (Author)
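
    The two records above describe estimating a second-order discrete environment model with an augmented least squares scheme; the details of the augmentation are not given, so the sketch below uses ordinary recursive least squares on a second-order ARX form as a simplified illustration. The contact model, the true parameter values and the forgetting factor are all hypothetical.

      import numpy as np

      class RecursiveLeastSquares:
          """RLS estimator for y[k] = phi[k]^T theta + e[k] with forgetting factor."""

          def __init__(self, n_params, lam=0.99, p0=1e4):
              self.theta = np.zeros(n_params)
              self.P = np.eye(n_params) * p0
              self.lam = lam

          def update(self, phi, y):
              phi = np.asarray(phi, dtype=float)
              gain = self.P @ phi / (self.lam + phi @ self.P @ phi)
              self.theta = self.theta + gain * (y - phi @ self.theta)
              self.P = (self.P - np.outer(gain, phi @ self.P)) / self.lam
              return self.theta

      # identify a hypothetical second-order contact model
      #   f[k] = a1*f[k-1] + a2*f[k-2] + b1*x[k-1] + b2*x[k-2]
      # from simulated force f and displacement x samples
      rng = np.random.default_rng(0)
      true_theta = np.array([1.5, -0.7, 0.2, 0.1])
      x = rng.standard_normal(500)
      f = np.zeros(500)
      for k in range(2, 500):
          f[k] = np.array([f[k - 1], f[k - 2], x[k - 1], x[k - 2]]) @ true_theta \
                 + 0.01 * rng.standard_normal()

      est = RecursiveLeastSquares(n_params=4)
      for k in range(2, 500):
          est.update([f[k - 1], f[k - 2], x[k - 1], x[k - 2]], f[k])
      print(est.theta)          # approaches [1.5, -0.7, 0.2, 0.1]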

  17. The Social Relationship Based Adaptive Multi-Spray-and-Wait Routing Algorithm for Disruption Tolerant Network

    Directory of Open Access Journals (Sweden)

    Jianfeng Guan

    2017-01-01

    Full Text Available The existing spray-based routing algorithms in DTN cannot dynamically adjust the number of message copies based on actual conditions, which results in a waste of resources and a reduction of the message delivery rate. Besides, the existing spray-based routing protocols may suffer from blind-spot or dead-end problems due to the limitations of the various given metrics. Therefore, this paper proposes a social relationship based adaptive multiple spray-and-wait routing algorithm (called SRAMSW) which retransmits message copies based on their residence times in the node via buffer management and selects forwarders based on the social relationship. By these means, the proposed algorithm can relieve message congestion in the buffer and improve the probability of replicas reaching their destinations. The simulation results under different scenarios show that the SRAMSW algorithm can improve the message delivery rate, reduce the messages' dwell time in the cache, and use the buffer more effectively.

  18. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    Science.gov (United States)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  19. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    International Nuclear Information System (INIS)

    Treiber, O; Wanninger, F; Fuehr, H; Panzer, W; Regulla, D; Winkler, G

    2003-01-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography

  20. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-01-01

    Highlights: ► We present an adaptive thresholding algorithm to segment oil spills. ► The segmentation algorithm is based on SAR images and wind field estimations. ► A Database of oil spill confirmations was used for the development of the algorithm. ► Wind field estimations have demonstrated to be useful for filtering look-alikes. ► Parallel programming has been successfully used to minimize processing time. - Abstract: Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean’s surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.

  1. Optimal shortening of uniform covering arrays.

    Directory of Open Access Journals (Sweden)

    Jose Torres-Jimenez

    Full Text Available Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over a v-symbol alphabet with the property that every N × t sub-array covers all t-tuples of the alphabet at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays.
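
    The OSCAR objective is concrete enough to state in a few lines: given a choice of delta rows and Delta columns to delete, count the t-tuples that the shortened array no longer covers. The sketch below is a brute-force evaluator of that objective (suitable as the inner loop of a greedy or meta-heuristic solver), not one of the twelve algorithms from the paper; the example values are illustrative.

      from itertools import combinations
      import numpy as np

      def missing_tuples(ca, t, v):
          """Total number of uncovered (column set, t-tuple) pairs in array `ca`."""
          ca = np.asarray(ca)
          missing = 0
          for cols in combinations(range(ca.shape[1]), t):
              seen = {tuple(row) for row in ca[:, cols]}
              missing += v ** t - len(seen)
          return missing

      def shorten(ca, drop_rows, drop_cols):
          """The (N - delta) x (k - Delta) array left after deleting the given
          row and column indices, i.e. the OSCAR decision variables."""
          ca = np.asarray(ca)
          keep_r = [i for i in range(ca.shape[0]) if i not in set(drop_rows)]
          keep_c = [j for j in range(ca.shape[1]) if j not in set(drop_cols)]
          return ca[np.ix_(keep_r, keep_c)]

      # evaluate one candidate shortening of a CA(4; 2, 3, 2)
      ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
      small = shorten(ca, drop_rows=[3], drop_cols=[2])
      print(missing_tuples(small, t=2, v=2))   # 1 pair left uncovered in the 3 x 2 array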

  2. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  3. Acoustic array systems theory, implementation, and application

    CERN Document Server

    Bai, Mingsian R; Benesty, Jacob

    2013-01-01

    Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applic

  4. Comparisons of receive array interference reduction techniques under erroneous generalized transmit beamforming

    KAUST Repository

    Radaydeh, Redha Mahmoud

    2014-02-01

    This paper studies generalized single-stream transmit beamforming employing receive array co-channel interference reduction algorithms under slow and flat fading multiuser wireless systems. The impact of imperfect prediction of channel state information for the desired user spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of over-loaded receive array with closely-spaced elements is considered, wherein it can be configured to specified interfering sources. Both dominant interference reduction and adaptive interference reduction techniques for statistically ordered and unordered interferers powers, respectively, are thoroughly studied. The effect of outdated statistical ordering of the interferers powers on the efficiency of dominant interference reduction is studied and then compared against the adaptive interference reduction. For the system models described above, new analytical formulations for the statistics of combined signal-to-interference-plus-noise ratio are presented, from which results for conventional maximum ratio transmission and single-antenna best transmit selection can be directly deduced as limiting cases. These results are then utilized to obtain quantitative measures for various performance metrics. They are also used to compare the achieved performance of various configuration models under consideration. © 1972-2012 IEEE.

  5. Image-reconstruction algorithms for positron-emission tomography systems

    International Nuclear Information System (INIS)

    Cheng, S.N.C.

    1982-01-01

    The positional uncertainty in the time-of-flight measurement of a positron-emission tomography system is modelled as a Gaussian distributed random variable, and the image is assumed to be piecewise constant on a rectilinear lattice. A reconstruction algorithm using maximum-likelihood estimation is derived for the situation in which time-of-flight data are sorted as the most-likely-position array. The algorithm is formulated as a linear system described by a nonseparable, block-banded, Toeplitz matrix, and a sine-transform technique is used to implement this algorithm efficiently. The reconstruction algorithms for both the most-likely-position array and the confidence-weighted array are described by similar equations, hence similar linear systems can be used to describe the reconstruction algorithm for a discrete, confidence-weighted array, when the matrix and the entries in the data array are properly identified. It is found that the mean square error depends on the ratio of the full width at half maximum of the time-of-flight measurement to the pixel size. When other parameters are fixed, the larger the pixel size, the smaller is the mean square error. In the study of resolution, parameters that affect the impulse response of time-of-flight reconstruction algorithms are identified. It is found that the larger the pixel size, the larger is the standard deviation of the impulse response. This shows that small mean square error and fine resolution are two contradictory requirements.

  6. An adaptive Phase-Locked Loop algorithm for faster fault ride through performance of interconnected renewable energy sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2013-01-01

    Interconnected renewable energy sources require fast and accurate fault ride through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid-side converter control of a renewable energy source, especially under fault ride through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault, with the purpose of accelerating the performance...

  7. An Adaptive Multi-Objective Particle Swarm Optimization Algorithm for Multi-Robot Path Planning

    Directory of Open Access Journals (Sweden)

    Nizar Hadi Abbas

    2016-07-01

    Full Text Available This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment that contains two obstacles and two danger sources. In the second case, the ability of five robots to find the shortest paths is improved. The proposed algorithm solves the optimization problem for the first case by finding the minimum distance from the initial to the goal position while also ensuring that the generated path keeps a maximum distance from the danger zones. For the second case, it finds the shortest path for every robot, without any collision between them, in the shortest time. In order to evaluate the proposed algorithm in terms of finding the best solution, six benchmark test functions are used to make a comparison between AMOPSO and the standard MOPSO. The results show that AMOPSO has a better ability to escape local optima and converges faster than MOPSO. The simulation results using Matlab 2014a indicate that this methodology is extremely valuable for every robot in a multi-robot framework to find its own proper path from the start to the destination position with minimum distance and time.

  8. Analytic and Unambiguous Phase-Based Algorithm for 3-D Localization of a Single Source with Uniform Circular Array

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-02-01

    Full Text Available This paper presents an analytic algorithm for estimating the three-dimensional (3-D) localization of a single source with uniform circular array (UCA) interferometers. Fourier transforms are exploited to expand the phase distribution of a single source, and the localization problem is reformulated as an equivalent spectrum manipulation problem. The 3-D parameters are decoupled to different spectrums in the Fourier domain. Algebraic relations are established between the 3-D localization parameters and the Fourier spectrums. The Fourier sampling theorem ensures that the minimum number of elements for 3-D localization of a single source with a UCA is five. Accuracy analysis provides mathematical insight into the 3-D localization algorithm, showing that a larger number of elements gives higher estimation accuracy. In addition, the phase-based high-order difference invariance (HODI) property of a UCA is found and exploited to realize phase range compression. Following phase range compression, ambiguity resolution is addressed via the HODI of a UCA. A major advantage of the algorithm is that the ambiguity resolution and the 3-D localization estimation are both analytic and are processed simultaneously, hence computationally efficient. Numerical simulations and experimental results are provided to verify the effectiveness of the proposed 3-D localization algorithm.

  9. Sparse Array Angle Estimation Using Reduced-Dimension ESPRIT-MUSIC in MIMO Radar

    Directory of Open Access Journals (Sweden)

    Chaozhu Zhang

    2013-01-01

    Full Text Available Sparse linear arrays provide better performance than filled linear arrays in terms of angle estimation and resolution, with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and the receive array are sparse linear arrays in the bistatic MIMO radar. Firstly, we present an ESPRIT-MUSIC method in which the ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm uses a MUSIC-based procedure to identify the true direction cosine estimate from a set of ambiguous candidate estimates. The paired transmit angle and receive angle can be estimated and the manifold ambiguity can be resolved. However, the proposed algorithm has high computational complexity due to the requirement of a two-dimensional search. Further, the Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is proposed to reduce the complexity of the algorithm, and the RD-ESPRIT-MUSIC only demands a one-dimensional search. Simulation results demonstrate the effectiveness of the method.

  10. Sparse array angle estimation using reduced-dimension ESPRIT-MUSIC in MIMO radar.

    Science.gov (United States)

    Zhang, Chaozhu; Pang, Yucai

    2013-01-01

    Sparse linear arrays provide better performance than filled linear arrays in terms of angle estimation and resolution, with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and the receive array are sparse linear arrays in the bistatic MIMO radar. Firstly, we present an ESPRIT-MUSIC method in which the ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm uses a MUSIC-based procedure to identify the true direction cosine estimate from a set of ambiguous candidate estimates. The paired transmit angle and receive angle can be estimated and the manifold ambiguity can be resolved. However, the proposed algorithm has high computational complexity due to the requirement of a two-dimensional search. Further, the Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is proposed to reduce the complexity of the algorithm, and the RD-ESPRIT-MUSIC only demands a one-dimensional search. Simulation results demonstrate the effectiveness of the method.

  11. A Combined Methodology of Adaptive Neuro-Fuzzy Inference System and Genetic Algorithm for Short-term Energy Forecasting

    Directory of Open Access Journals (Sweden)

    KAMPOUROPOULOS, K.

    2014-02-01

    Full Text Available This document presents an energy forecast methodology using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Genetic Algorithms (GA). The GA is used for the selection of the training inputs of the ANFIS in order to minimize the training error. The presented algorithm has been installed and has been operating in an automotive manufacturing plant. It periodically communicates with the plant to obtain new information and update the database in order to improve its training results. Finally, the results of the algorithm are used to provide short-term load forecasts for the different modeled consumption processes.

  12. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.

  13. Verification of the coupled space-angle adaptivity algorithm for the finite element-spherical harmonics method via the method of manufactured solutions

    International Nuclear Information System (INIS)

    Park, H.; De Oliveira, C. R. E.

    2007-01-01

    This paper describes the verification of the recently developed space-angle self-adaptive algorithm for the finite element-spherical harmonics method via the Method of Manufactured Solutions. This method provides a simple, yet robust way for verifying the theoretical properties of the adaptive algorithm and interfaces very well with the underlying second-order, even-parity transport formulation. Simple analytic solutions in both spatial and angular variables are manufactured to assess the theoretical performance of the a posteriori error estimates. The numerical results confirm reliability of the developed space-angle error indicators. (authors)

  14. Nonlinear PI Control with Adaptive Interaction Algorithm for Multivariable Wastewater Treatment Process

    Directory of Open Access Journals (Sweden)

    S. I. Samsudin

    2014-01-01

    Full Text Available The wastewater treatment plant (WWTP) is well known for the nonlinearity of its control parameters and is thus difficult to control. In this paper, an enhanced nonlinear PI controller (ENon-PI) to compensate for the nonlinearity of the activated sludge WWTP is proposed. The ENon-PI controller is designed by cascading a sector-bounded nonlinear gain to a linear PI controller. The rate variation of the nonlinear gain kn is automatically updated based on an adaptive interaction algorithm. The simplified ENon-PI control structure with adapted kn is shown to give significant improvement under various dynamic influents: more than 30% of the integral square error and 14% of the integral absolute error are reduced compared to the benchmark PI for DO control and for nitrate control in nitrogen removal. Better average effluent quality, fewer effluent violations, and lower aeration energy consumption resulted.
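
    The record describes cascading a sector-bounded nonlinear gain ahead of a linear PI controller and adapting its rate parameter kn online. The exact gain shape and adaptive interaction law are not given in the abstract; the sketch below uses one common sector-bounded choice, k(e) = cosh(kn*e) clipped to a maximum, with a crude gradient-style update of kn, purely as an illustration of the structure. All gains and the toy first-order plant are hypothetical.

      import math

      class NonlinearPI:
          """Linear PI preceded by a sector-bounded gain k(e) = cosh(kn * e),
          clipped to k_max; kn is adapted online with a simple heuristic rule."""

          def __init__(self, kp, ki, kn=1.0, k_max=5.0, gamma=1e-3, dt=0.01):
              self.kp, self.ki = kp, ki
              self.kn, self.k_max, self.gamma, self.dt = kn, k_max, gamma, dt
              self.integral = 0.0

          def step(self, setpoint, measurement):
              e = setpoint - measurement
              k = min(math.cosh(self.kn * e), self.k_max)   # sector-bounded, >= 1
              scaled_e = k * e
              self.integral += scaled_e * self.dt
              u = self.kp * scaled_e + self.ki * self.integral
              # crude adaptation: grow kn while the scaled error is large, else decay
              self.kn = max(0.0, self.kn + self.gamma * (abs(scaled_e) - 1.0) * self.dt)
              return u

      # usage against a toy first-order plant y' = (-y + u) / tau
      ctrl, y, tau, dt = NonlinearPI(kp=1.2, ki=0.8), 0.0, 2.0, 0.01
      for _ in range(2000):
          u = ctrl.step(setpoint=1.0, measurement=y)
          y += (-y + u) / tau * dt
      print(round(y, 3))       # settles near the setpoint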

  15. Cache-Aware and Cache-Oblivious Adaptive Sorting

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel

    2005-01-01

    Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.
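
    The measure of presortedness referred to above is the number of inversions: pairs i < j with input[i] > input[j]. The sketch below counts inversions with a single merge-sort pass in O(n log n) time, which makes the quantity the two algorithms are optimal with respect to concrete; it is not a reconstruction of the algorithms themselves.

      def count_inversions(seq):
          """Return (sorted_list, inversion_count) using a merge-sort pass."""
          n = len(seq)
          if n <= 1:
              return list(seq), 0
          mid = n // 2
          left, inv_left = count_inversions(seq[:mid])
          right, inv_right = count_inversions(seq[mid:])
          merged, inv = [], inv_left + inv_right
          i = j = 0
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  merged.append(left[i])
                  i += 1
              else:
                  merged.append(right[j])
                  j += 1
                  inv += len(left) - i   # every remaining left element inverts with right[j]
          merged.extend(left[i:])
          merged.extend(right[j:])
          return merged, inv

      print(count_inversions([1, 3, 2, 5, 4]))   # ([1, 2, 3, 4, 5], 2)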

  16. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
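
    As a companion to the concepts introduced above, the sketch below is a minimal generational genetic algorithm with tournament selection, one-point crossover and bit-flip mutation, maximizing a toy OneMax fitness (count of 1-bits). It illustrates the basic loop only and is not the software tool described in the record; all parameter values are illustrative.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=100,
                            p_cross=0.9, p_mut=0.02, seed=0):
          """Minimal generational GA maximizing `fitness` over bit strings."""
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

          def tournament():
              a, b = rng.sample(pop, 2)
              return a if fitness(a) >= fitness(b) else b

          for _ in range(generations):
              nxt = []
              while len(nxt) < pop_size:
                  p1, p2 = tournament(), tournament()
                  if rng.random() < p_cross:            # one-point crossover
                      cut = rng.randrange(1, n_bits)
                      c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  else:
                      c1, c2 = p1[:], p2[:]
                  for child in (c1, c2):                # bit-flip mutation
                      nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
              pop = nxt[:pop_size]
          return max(pop, key=fitness)

      best = genetic_algorithm(fitness=sum)             # OneMax problem
      print(sum(best))                                  # close to n_bits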

  17. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer.

    Science.gov (United States)

    Miao, Zhidong; Liu, Dake; Gong, Chen

    2017-08-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To solve these issues, this paper presents a novel adaptive impedance matching network (IMN) for an IWPT system. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to keep matching with the coils and to adjust the output power in adaptation to coil distance variation. A closed loop control algorithm is used to change the capacitors continually, which can compensate mismatches and adjust the output power simultaneously. The proposed adaptive IMN IWPT system works at 125 kHz with 2 W of power delivered to the load. Compared with the series resonant IWPT system and the fixed IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies over a large range from 0.05 to 0.8 for 2 W output power.

  18. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer

    Science.gov (United States)

    Miao, Zhidong; Liu, Dake

    2017-01-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To solve these issues, this paper presents a novel adaptive impedance matching network (IMN) for an IWPT system. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to keep matching with the coils and to adjust the output power in adaptation to coil distance variation. A closed loop control algorithm is used to change the capacitors continually, which can compensate mismatches and adjust the output power simultaneously. The proposed adaptive IMN IWPT system works at 125 kHz with 2 W of power delivered to the load. Compared with the series resonant IWPT system and the fixed IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies over a large range from 0.05 to 0.8 for 2 W output power. PMID:28763011

  19. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    Science.gov (United States)

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
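
    The energy-aware part of EASA amounts to scaling the sampling period with the node's energy state; the thresholds and interpolation law below are invented for illustration and are not taken from the paper, which also couples the adaptation to measured harvested energy and signal dynamics.

      def energy_aware_interval(base_interval_s, battery_joules, harvest_watts,
                                e_low=50.0, e_high=200.0, max_slowdown=8.0):
          """Scale the sampling interval with the node's available energy.

          Below e_low joules the node samples up to max_slowdown times slower;
          above e_high, or while harvesting covers the load, it keeps the base rate.
          """
          if battery_joules >= e_high or harvest_watts > 0.05:
              factor = 1.0
          elif battery_joules <= e_low:
              factor = max_slowdown
          else:                              # linear interpolation in between
              frac = (e_high - battery_joules) / (e_high - e_low)
              factor = 1.0 + frac * (max_slowdown - 1.0)
          return base_interval_s * factor

      # a node with 120 J left and no harvesting slows its power-hungry sensor down
      print(energy_aware_interval(base_interval_s=30, battery_joules=120, harvest_watts=0.0))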

  20. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    Directory of Open Access Journals (Sweden)

    Bruno Srbinovski

    2016-03-01

    Full Text Available Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.

  1. Design of application specific long period waveguide grating filters using adaptive particle swarm optimization algorithms

    International Nuclear Information System (INIS)

    Semwal, Girish; Rastogi, Vipul

    2014-01-01

    We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra. (paper)

  2. Experimental verification of preset time count rate meters based on adaptive digital signal processing algorithms

    Directory of Open Access Journals (Sweden)

    Žigić Aleksandar D.

    2005-01-01

    Full Text Available Experimental verifications of two optimized adaptive digital signal processing algorithms implemented in two preset time count rate meters were performed according to appropriate standards. The random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Then measurement results for background radiation levels were obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurement results, conducted without and with radioisotopes for the specified errors of 10% and 5%, showed good agreement with theoretical predictions.

  3. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation onto a two-dimensional processor array with nearest-neighbor connections, and with cardinality variation onto a linear processor array. An efficient parallel/pipeline algorithm was also developed for the linear array, achieving significantly higher efficiency.

  4. A Readout Integrated Circuit (ROIC) employing self-adaptive background current compensation technique for Infrared Focal Plane Array (IRFPA)

    Science.gov (United States)

    Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan

    2018-05-01

    A novel self-adaptive background current compensation circuit applied to infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves the uniformity among pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. Test results from a 4 × 4 experimental chip show that the proposed circuit achieves high precision, broad applicability and a high degree of self-adaptation. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.

  5. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  6. A novel grooming algorithm with the adaptive weight and load balancing for dynamic holding-time-aware traffic in optical networks

    Science.gov (United States)

    Xu, Zhanqi; Huang, Jiangjiang; Zhou, Zhiqiang; Ding, Zhe; Ma, Tao; Wang, Junping

    2013-10-01

    To maximize the resource utilization of optical networks, dynamic traffic grooming, which can efficiently multiplex many low-speed services arriving dynamically onto high-capacity optical channels, has been studied extensively and used widely. However, the link weights in existing research works can be improved, since they do not adapt well to the network status and load. By exploiting information on the holding times of the preexisting and new lightpaths and the requested bandwidth of a user service, this paper proposes a grooming algorithm using Adaptively Weighted Links for Holding-Time-Aware (HTA) traffic (abbreviated as AWL-HTA), especially in the setup process of new lightpath(s). Therefore, the proposed algorithm can not only establish a lightpath that uses network resources efficiently, but also achieve load balancing. In this paper, the key issues of link weight assignment and the procedure within the AWL-HTA are addressed in detail. Comprehensive simulation and experimental results show that the proposed algorithm has a much lower blocking ratio and latency than other existing algorithms.
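    The abstract does not give the AWL-HTA weight formula, so the sketch below assumes a weight that grows with link load and with the holding-time overlap between existing lightpaths and the new request, and then routes each request over the resulting weighted graph; all constants and the toy topology are illustrative.

    ```python
    import heapq

    # Illustrative sketch of holding-time-aware, load-adaptive link weights.
    # The paper's exact AWL-HTA formula is not given in the abstract; here a link
    # gets heavier as its free capacity shrinks and as the residual holding time of
    # existing lightpaths overlaps the new request (purely an assumption).

    def link_weight(free_capacity, total_capacity, overlap_time_s, alpha=1.0, beta=0.01):
        load_term = alpha * (1.0 - free_capacity / total_capacity)   # load balancing
        holding_term = beta * overlap_time_s                          # HTA penalty
        return 1.0 + load_term + holding_term

    def shortest_path(graph, src, dst):
        """graph: {node: [(neighbor, weight), ...]} -> (cost, path)."""
        heap, seen = [(0.0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
        return float("inf"), []

    # Toy 4-node topology: weights recomputed per request from current link state.
    g = {
        "A": [("B", link_weight(6, 10, 120)), ("C", link_weight(9, 10, 10))],
        "B": [("D", link_weight(2, 10, 300))],
        "C": [("D", link_weight(8, 10, 30))],
    }
    print(shortest_path(g, "A", "D"))
    ```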

  7. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    OpenAIRE

    Cancelier, A.; Claumann, C. A.; Bolzan, A.; Machado, R. A. F.

    2016-01-01

    Abstract This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, ...

  8. Modeling for deformable mirrors and the adaptive optics optimization program

    International Nuclear Information System (INIS)

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-01-01

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language

  9. Adaptive enhancement of learning protocol in hippocampal cultured networks grown on multielectrode arrays

    Science.gov (United States)

    Pimashkin, Alexey; Gladkov, Arseniy; Mukhina, Irina; Kazantsev, Victor

    2013-01-01

    Learning in neuronal networks can be investigated using dissociated cultures on multielectrode arrays supplied with appropriate closed-loop stimulation. It was shown in previous studies that weakly responding neurons on the electrodes can be trained to increase their evoked spiking rate within a predefined time window after the stimulus. Such neurons can be associated with weak synaptic connections in the nearby culture network. The stimulation leads to an increase in the connectivity and in the response. However, it was not possible to perform the learning protocol for neurons on electrodes with relatively strong synaptic inputs that respond at higher rates. We proposed an adaptive closed-loop stimulation protocol capable of achieving learning even for the highly responding electrodes. This means that the culture network can appropriately reorganize its synaptic connectivity to generate a desired response. We introduced an adaptive reinforcement condition accounting for the response variability in control stimulation. It significantly extended the learning protocol to a large number of responding electrodes independently of their baseline response level. We also found that the learning effect was preserved 4–6 h after training. PMID:23745105

  10. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA, compensating for the insufficient hardware resources of a single FPGA and increasing the parallel processing ability and scalability of the system.

  11. Applying a New Adaptive Genetic Algorithm to Study the Layout of Drilling Equipment in Semisubmersible Drilling Platforms

    Directory of Open Access Journals (Sweden)

    Wensheng Xiao

    2015-01-01

    Full Text Available This study proposes a new selection method called trisection population for genetic algorithm selection operations. In this new algorithm, the 2N/3 parent individuals with the highest fitness are genetically manipulated to reproduce offspring. This selection method ensures a high rate of effective population evolution and overcomes the tendency of the population to fall into local optimal solutions. Rastrigin’s test function was selected to verify the superiority of the method. Based on the characteristics of the arctangent function, adaptive methods for the genetic algorithm's crossover and mutation probabilities were proposed. These allow individuals close to the average fitness to undergo crossover and mutation with greater probability, while individuals close to the maximum fitness are not easily destroyed. This study also analyzed the equipment layout constraints and objective functions of deep-water semisubmersible drilling platforms. The improved genetic algorithm was used to solve the layout plan. Optimization results demonstrate the effectiveness of the improved algorithm and the suitability of the layout plans.
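    The exact arctangent-based formulas and constants are not given in the abstract; the sketch below reproduces the stated behavior (high crossover/mutation probability near the average fitness, low near the best) under assumed constants.

    ```python
    import math

    # Sketch of arctangent-based adaptive crossover/mutation probabilities.
    # The paper's exact constants are not given in the abstract; the shape below
    # (high probability near average fitness, low near the best) is the stated idea.

    def adaptive_probability(f, f_avg, f_max, p_min=0.1, p_max=0.9):
        """Return an operator probability for an individual with fitness f."""
        if f <= f_avg or f_max == f_avg:
            return p_max                      # below-average individuals: explore hard
        # Normalised distance of f from the average, in [0, 1].
        t = (f - f_avg) / (f_max - f_avg)
        # arctan maps t smoothly onto [0, 1); better individuals get smaller p.
        decay = math.atan(4.0 * t) / (math.pi / 2.0)
        return p_max - (p_max - p_min) * decay

    fitnesses = [0.2, 0.5, 0.8, 1.0]
    f_avg, f_max = sum(fitnesses) / len(fitnesses), max(fitnesses)
    for f in fitnesses:
        print(f, "-> crossover prob", round(adaptive_probability(f, f_avg, f_max), 3))
    ```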

  12. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
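    As a point of reference, the static (non-adaptive) quantile regression problem can be written as the linear program sketched below; this sketch solves it with a generic LP solver rather than the paper's tailored simplex-with-updating scheme, and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Linear-programming formulation of quantile regression (pinball loss):
    # minimise tau*1'u + (1-tau)*1'v  s.t.  X beta + u - v = y,  u, v >= 0.

    def quantile_regression(X, y, tau=0.5):
        n, p = X.shape
        c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
        A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
        bounds = [(None, None)] * p + [(0, None)] * (2 * n)
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
        return res.x[:p]

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)
    print(quantile_regression(X, y, tau=0.9))   # 0.9-quantile regression coefficients
    ```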

  13. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2014-01-01

    Full Text Available The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or a complex multimodal structure. As a result, we proposed an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.

  14. MATLAB tensor classes for fast algorithm prototyping.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
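    For readers without MATLAB, the matricization operation wrapped by the tensor_as_matrix class can be sketched with plain NumPy arrays as follows; the function names here are illustrative, not part of the toolbox.

    ```python
    import numpy as np

    # Mode-n matricization ("unfolding") of a dense tensor and its inverse.

    def unfold(tensor, mode):
        """Arrange the mode-`mode` fibers of `tensor` as columns of a matrix."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def fold(matrix, mode, shape):
        """Inverse of `unfold` for a tensor of the given original shape."""
        full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(matrix.reshape(full_shape), 0, mode)

    T = np.arange(24).reshape(2, 3, 4)
    M = unfold(T, mode=1)                 # 3 x 8 matrix
    assert np.array_equal(fold(M, 1, T.shape), T)
    ```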

  15. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex Lp (0 < p < 1) regularizations. This paper adopts multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we apply the method to sparse image recovery and obtain good results in comparison with related work.
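    The sketch below shows a generic iteratively reweighted soft-thresholding loop for an Lp-regularized least-squares problem, as an illustration of the idea behind SAITA; it does not implement the authors' multiple sub-dictionary representation or their specific parameter rule, and all settings are assumptions.

    ```python
    import numpy as np

    # Generic iteratively reweighted soft thresholding for
    # min_x 0.5*||A x - y||^2 + lam*||x||_p^p with 0 < p < 1 (illustration only).

    def reweighted_lp_ista(A, y, lam=0.1, p=0.5, n_iter=200, eps=1e-6):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - step * grad
            # Reweighted soft threshold: weight ~ derivative of |x|^p at current x.
            w = p * (np.abs(x) + eps) ** (p - 1.0)
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200))
    x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    y = A @ x_true + 0.01 * rng.normal(size=60)
    x_hat = reweighted_lp_ista(A, y)
    print(np.nonzero(np.abs(x_hat) > 0.1)[0])           # indices of the large entries in the estimate
    ```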

  16. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    Science.gov (United States)

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2017-10-01

    In this paper, we investigate the nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain the iterative control policies, which not only ensure the system to achieve stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning technique to formulate and handle the DT nonzero-sum games for multiplayer. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms and the corresponding stability analysis is also provided via the Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.

  17. Adaptive lesion formation using dual mode ultrasound array system

    Science.gov (United States)

    Liu, Dalong; Casper, Andrew; Haritonova, Alyona; Ebbini, Emad S.

    2017-03-01

    We present the results from an ultrasound-guided focused ultrasound platform designed to perform real-time monitoring and control of lesion formation. Real-time signal processing of echogenicity changes during lesion formation allows for identification of signature events indicative of tissue damage. The detection of these events triggers the cessation or the reduction of the exposure (intensity and/or time) to prevent overexposure. A dual mode ultrasound array (DMUA) is used for forming single- and multiple-focus patterns in a variety of tissues. The DMUA approach allows for inherent registration between the therapeutic and imaging coordinate systems providing instantaneous, spatially-accurate feedback on lesion formation dynamics. The beamformed RF data has been shown to have high sensitivity and specificity to tissue changes during lesion formation, including in vivo. In particular, the beamformed echo data from the DMUA is very sensitive to cavitation activity in response to HIFU in a variety of modes, e.g. boiling cavitation. This form of feedback is characterized by a sudden increase in echogenicity that could occur within milliseconds of the application of HIFU (see http://youtu.be/No2wh-ceTLs for an example). The real-time beamforming and signal processing allowing the adaptive control of lesion formation are enabled by a high performance GPU platform (response time within 10 msec). We present results from a series of experiments in bovine cardiac tissue demonstrating the robustness and increased speed of volumetric lesion formation for a range of clinically-relevant exposures. Gross histology demonstrates clearly that adaptive lesion formation results in tissue damage consistent with the size of the focal spot and the raster scan in 3 dimensions. In contrast, uncontrolled volumetric lesions exhibit significant pre-focal buildup due to excessive exposure from multiple full-exposure HIFU shots. Stopping or reducing the HIFU exposure upon the detection of such an

  18. Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm. (On-Line Harmonics Estimation Application

    Directory of Open Access Journals (Sweden)

    Eyad K Almaita

    2017-03-01

    Keywords: Energy efficiency, Power quality, Radial basis function, neural networks, adaptive, harmonic. Article History: Received Dec 15, 2016; Received in revised form Feb 2nd 2017; Accepted 13rd 2017; Available online How to Cite This Article: Almaita, E.K and Shawawreh J.Al (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17

  19. LC-lens array with light field algorithm for 3D biomedical applications

    Science.gov (United States)

    Huang, Yi-Pai; Hsieh, Po-Yuan; Hassanfiroozi, Amir; Martinez, Manuel; Javidi, Bahram; Chu, Chao-Yu; Hsuan, Yun; Chu, Wen-Chun

    2016-03-01

    In this paper, a liquid crystal lens (LC-lens) array was utilized in 3D bio-medical applications, including a 3D endoscope and a light field microscope. Compared with a conventional plastic lens array, which is usually placed in a 3D endoscope or light field microscope system to record image disparity, our LC-lens array offers the flexibility of electrically changing its focal length. By using the LC-lens array, the working distance and image quality of the 3D endoscope and microscope could be enhanced. Furthermore, 2D/3D switching could be achieved by turning the electrical power on the LC-lens array off or on. In the 3D endoscope case, a hexagonal micro LC-lens array with 350 um lens diameter was placed at the front end of a 1 mm diameter endoscope. With an electric field applied to the LC-lens array, the 3D specimen is recorded as if from seven micro-cameras with different disparities, and the 3D reconstruction of the specimen can be computed from those micro-images. On the other hand, if the electric field on the LC-lens array is turned off, a conventional high resolution 2D endoscope image is recorded. In the light field microscope case, the LC-lens array was placed in front of the CMOS sensor. The main purpose of the LC-lens array is to extend the refocusing distance of the light field microscope, which is usually very narrow in a focused light field microscope system, by montaging many light field images sequentially focused at different depths. By adjusting the focal length of the LC-lens array from 2.4 mm to 2.9 mm, the refocusing distance was extended from 1 mm to 11.3 mm. Moreover, an LC wedge could be used to electrically shift the optical axis and increase the resolution of the light field.

  20. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    Science.gov (United States)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
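    A minimal non-iterative version of the idea, assuming two coherent signals and a plain least-squares fit of the cross-spectrum phase slope (rather than the paper's gradient-based adaptive-delay search), might look as follows; the signals and parameters are synthetic.

    ```python
    import numpy as np
    from scipy.signal import csd, coherence

    # A pure delay tau between coherent signals gives a cross-spectrum phase of
    # -2*pi*f*tau, so the delay can be read off the phase slope.

    def delay_from_phase_slope(x, y, fs, nperseg=1024, coh_min=0.5):
        f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)
        _, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)
        phase = np.unwrap(np.angle(Pxy))
        keep = Cxy > coh_min                       # fit only well-coherent bins
        slope = np.polyfit(f[keep], phase[keep], 1)[0]
        return -slope / (2.0 * np.pi)              # delay in seconds

    fs, true_delay = 8000.0, 0.003
    t = np.arange(0, 4.0, 1.0 / fs)
    rng = np.random.default_rng(2)
    s = rng.normal(size=t.size)
    x = s + 0.1 * rng.normal(size=t.size)
    y = np.roll(s, int(true_delay * fs)) + 0.1 * rng.normal(size=t.size)
    print(delay_from_phase_slope(x, y, fs))        # close to 0.003 s
    ```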

  1. Noise Reduction with Microphone Arrays for Speaker Identification

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Z

    2011-12-22

    Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) Test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) Characterize and compare different multichannel noise reduction algorithms; (3) Provide recommendations for using these multichannel algorithms; and (4) Ultimately answer the question - Can the use of microphone arrays aid in speaker identification?
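    As an illustration of how spatial information can be exploited, the sketch below implements a basic delay-and-sum beamformer; the array geometry and signals are assumptions, not taken from the report, and the report's candidate multichannel algorithms are more sophisticated.

    ```python
    import numpy as np

    # Delay-and-sum beamformer: align the channels toward the talker and average,
    # which attenuates off-axis noise before any speaker-identification front end.

    def delay_and_sum(signals, mic_positions_m, look_direction, fs, c=343.0):
        """signals: (n_mics, n_samples); look_direction: unit vector toward the talker."""
        look = np.asarray(look_direction, dtype=float)
        look /= np.linalg.norm(look)
        proj = mic_positions_m @ look / c              # seconds each mic "leads" the origin
        offsets = np.round((proj.max() - proj) * fs).astype(int)
        n = signals.shape[1] - offsets.max()
        aligned = np.stack([sig[o:o + n] for sig, o in zip(signals, offsets)])
        return aligned.mean(axis=0)                    # coherent sum toward the look direction

    # Example: 4-mic linear array with 5 cm spacing and a slightly off-broadside talker.
    fs = 16000
    mics = np.array([[0.0, i * 0.05, 0.0] for i in range(4)])
    rng = np.random.default_rng(3)
    x = rng.normal(size=(4, fs))                       # stand-in for recorded channels
    y = delay_and_sum(x, mics, look_direction=[1.0, 0.2, 0.0], fs=fs)
    ```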

  2. Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2013-12-01

    Full Text Available Artificial neural networks are inspired by biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, which was invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks. It is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in a changing world. In this paper we introduce a GN model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered or rejected among the nodes of the network. It can also be used for explanation and optimization of the ART1 learning algorithm.

  3. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H; von Schwerin, E; Szepessy, A; Tempone, Raul

    2011-01-01

    . An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005.). This form of the adaptive algorithm generates

  4. Diagnostics of the BIOMASS feed array prototype

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey; Pontoppidan, Kennie Nybo

    2013-01-01

    The 3D reconstruction algorithm is applied to the prototype feed array of the BIOMASS synthetic aperture radar, recently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility in Denmark. Careful analysis of the measured feed array data has shown that the test support structure...

  5. Adaptive DFT-Based Interferometer Fringe Tracking

    Science.gov (United States)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2005-12-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  6. Adaptive DFT-Based Interferometer Fringe Tracking

    Directory of Open Access Journals (Sweden)

    Wesley A. Traub

    2005-09-01

    Full Text Available An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  7. An Adaptive Method for Switching between Pedestrian/Car Indoor Positioning Algorithms based on Multilayer Time Sequences

    Directory of Open Access Journals (Sweden)

    Zhining Gu

    2018-02-01

    Full Text Available Pedestrian dead reckoning (PDR) positioning algorithms can be used to obtain a target’s location only for movement with step features and not for driving, for which the trilateral Bluetooth indoor positioning method can be used. In this study, to obtain the precise locations of different states (pedestrian/car) using the corresponding positioning algorithms, we propose an adaptive method for switching between the PDR and car indoor positioning algorithms based on multilayer time sequences (MTSs). MTSs, which consider the behavior context, comprise two main aspects: filtering of noisy data in small-scale time sequences and using a state chain to reduce the time delay of algorithm switching in large-scale time sequences. The proposed method can be expected to realize the recognition of stationary, walking, driving, or other states; switch to the correct indoor positioning algorithm; and improve the accuracy of localization compared to using a single positioning algorithm. Our experiments show that the recognition of static, walking, driving, and other states improves by 5.5%, 45.47%, 26.23%, and 21% on average, respectively, compared with the convolutional neural network (CNN) method. The time delay decreases by approximately 0.5–8.5 s for the transition between states and by approximately 24 s for the entire process.

  8. An Adaptive Method for Switching between Pedestrian/Car Indoor Positioning Algorithms based on Multilayer Time Sequences.

    Science.gov (United States)

    Gu, Zhining; Guo, Wei; Li, Chaoyang; Zhu, Xinyan; Guo, Tao

    2018-02-27

    Pedestrian dead reckoning (PDR) positioning algorithms can be used to obtain a target's location only for movement with step features and not for driving, for which the trilateral Bluetooth indoor positioning method can be used. In this study, to obtain the precise locations of different states (pedestrian/car) using the corresponding positioning algorithms, we propose an adaptive method for switching between the PDR and car indoor positioning algorithms based on multilayer time sequences (MTSs). MTSs, which consider the behavior context, comprise two main aspects: filtering of noisy data in small-scale time sequences and using a state chain to reduce the time delay of algorithm switching in large-scale time sequences. The proposed method can be expected to realize the recognition of stationary, walking, driving, or other states; switch to the correct indoor positioning algorithm; and improve the accuracy of localization compared to using a single positioning algorithm. Our experiments show that the recognition of static, walking, driving, and other states improves by 5.5%, 45.47%, 26.23%, and 21% on average, respectively, compared with convolutional neural network (CNN) method. The time delay decreases by approximately 0.5-8.5 s for the transition between states and by approximately 24 s for the entire process.

  9. Implementation of a real-time adaptive digital shaping for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Regadío, Alberto, E-mail: aregadio@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain); Sánchez-Prieto, Sebastián, E-mail: ssanchez@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Prieto, Manuel, E-mail: mprieto@srg.aut.uah.es [Department of Computer Engineering, Space Research Group, Universidad de Alcalá, 28805 Alcalá de Henares (Spain); Tabero, Jesús, E-mail: taberogj@inta.es [Electronic Technology Area, Instituto Nacional de Técnica Aeroespacial, 28850 Torrejón de Ardoz (Spain)

    2014-01-21

    This paper presents the structure, design and implementation of a new adaptive digital shaper for processing the pulses generated in nuclear particle detectors. The proposed adaptive algorithm has the capacity to automatically adjust the coefficients for shaping an input signal with a desired profile in real-time. Typical shapers such as triangular, trapezoidal or cusp-like ones can be generated, but more exotic unipolar shaping could also be performed. A practical prototype was designed, implemented and tested in a Field Programmable Gate Array (FPGA). Particular attention was paid to the amount of internal FPGA resources required and to the sampling rate, making the design as simple as possible in order to minimize power consumption. Lastly, its performance and capabilities were measured using simulations and a real benchmark.

  10. Implementation of a real-time adaptive digital shaping for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Regadío, Alberto; Sánchez-Prieto, Sebastián; Prieto, Manuel; Tabero, Jesús

    2014-01-01

    This paper presents the structure, design and implementation of a new adaptive digital shaper for processing the pulses generated in nuclear particle detectors. The proposed adaptive algorithm has the capacity to automatically adjust the coefficients for shaping an input signal with a desired profile in real-time. Typical shapers such as triangular, trapezoidal or cusp-like ones can be generated, but more exotic unipolar shaping could also be performed. A practical prototype was designed, implemented and tested in a Field Programmable Gate Array (FPGA). Particular attention was paid to the amount of internal FPGA resources required and to the sampling rate, making the design as simple as possible in order to minimize power consumption. Lastly, its performance and capabilities were measured using simulations and a real benchmark

  11. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    Science.gov (United States)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

    Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle-boundary using reflected precursors. Results are however often deteriorated due to the sparse station coverage and interference of faint back-scattered with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We

  12. A comparative evaluation of adaptive noise cancellation algorithms for minimizing motion artifacts in a forehead-mounted wearable pulse oximeter.

    Science.gov (United States)

    Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush

    2007-01-01

    Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio since photoplethysmographic (PPG) signals, from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived, are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals by least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective in reducing SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly fewer operations.
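    A minimal sketch of accelerometer-referenced LMS noise cancellation is shown below; the filter order, step size and synthetic signals are assumptions for illustration, not the study's settings or data.

    ```python
    import numpy as np

    # LMS adaptive noise cancellation: estimate the motion artifact from the
    # accelerometer reference and subtract it from the corrupted PPG channel.

    def lms_cancel(primary, reference, order=8, mu=0.01):
        """Return the cleaned primary signal (primary minus the estimated artifact)."""
        w = np.zeros(order)
        out = np.zeros_like(primary)
        for n in range(order - 1, len(primary)):
            x = reference[n - order + 1:n + 1][::-1]   # current + past reference samples
            e = primary[n] - w @ x                     # error = cleaned PPG sample
            w += 2.0 * mu * e * x                      # LMS weight update
            out[n] = e
        return out

    fs = 100
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(4)
    ppg = np.sin(2 * np.pi * 1.2 * t)                          # ~72 beats/min pulse
    accel = rng.normal(size=t.size)                            # motion reference channel
    artifact = np.convolve(accel, [0.5, 0.3, 0.2])[:t.size]    # artifact correlated with motion
    cleaned = lms_cancel(ppg + artifact, accel)
    ```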

  13. A closer look at adaptive regret

    NARCIS (Netherlands)

    D. Adamskyi (Dmitri); W.M. Koolen-Wijkstra (Wouter); A. Chernov (Alexey); V. Vovk

    2016-01-01

    For the prediction with expert advice setting, we consider methods to construct algorithms that have low adaptive regret. The adaptive regret of an algorithm on a time interval [t1,t2] is the loss of the algorithm minus the loss of the best expert over that interval. Adaptive regret

  14. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restrictive for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have DVH trade-offs similar to those of the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with DVH trade-offs similar to those of the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan at high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment

  15. VLSI implementation of MIMO detection for 802.11n using a novel adaptive tree search algorithm

    International Nuclear Information System (INIS)

    Yao Heng; Jian Haifang; Zhou Liguo; Shi Yin

    2013-01-01

    A 4×4 64-QAM multiple-input multiple-output (MIMO) detector is presented for application in an IEEE 802.11n wireless local area network. The detector is the implementation of a novel adaptive tree search (ATS) algorithm, and multiple ATS cores need to be instantiated to meet the wideband requirement of the 802.11n standard. Both the ATS algorithm and the architectural considerations are explained. The latency of the detector is 0.75 μs, and the detector has a gate count of 848 k with a total of 19 parallel ATS cores. Each ATS core runs at 67 MHz. Measurement results show that compared with the floating-point ATS algorithm, the fixed-point implementation incurs a loss of 0.9 dB at a BER of 10^-3. (semiconductor integrated circuits)

  16. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  17. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  18. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
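    The momentum-plus-adaptive-restart ingredient can be illustrated on a simple L1-regularized least-squares problem as follows; this sketch uses a single Lipschitz constant and therefore does not include the B1-based majorizing matrices that distinguish BARISTA, and the data are synthetic.

    ```python
    import numpy as np

    # Momentum-accelerated iterative soft thresholding with gradient-based
    # adaptive restart for min_x 0.5*||A x - y||^2 + lam*||x||_1.

    def fista_restart(A, y, lam=0.1, n_iter=300):
        L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
        x = z = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(n_iter):
            x_prev = x
            grad = A.T @ (A @ z - y)
            step = z - grad / L
            x = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)
            # Adaptive restart: drop momentum when it points against the descent direction.
            if (z - x) @ (x - x_prev) > 0:
                t, z = 1.0, x
                continue
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x + ((t - 1.0) / t_next) * (x - x_prev)
            t = t_next
        return x

    rng = np.random.default_rng(5)
    A = rng.normal(size=(80, 256)) / np.sqrt(80)
    x_true = np.zeros(256); x_true[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
    x_hat = fista_restart(A, A @ x_true, lam=0.01)
    ```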

  19. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which systematically constructs a family of planar array processors. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
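    For reference, the sequential one-step fraction-free (Bareiss) elimination that such array processors parallelize can be sketched as follows; the mapping onto a processor array itself is not shown, and the example matrix is arbitrary.

    ```python
    # One-step fraction-free (Bareiss) Gaussian elimination on an integer matrix:
    # every intermediate quantity stays integral because each update divides
    # exactly by the previous pivot. Nonzero pivots are assumed for simplicity.

    def bareiss(M):
        """Return the fraction-free row-echelon form of an integer matrix (list of lists)."""
        A = [row[:] for row in M]
        n, m = len(A), len(A[0])
        prev = 1
        for k in range(min(n, m) - 1):
            for i in range(k + 1, n):
                for j in range(k + 1, m):
                    # Bareiss update: the division by the previous pivot is exact.
                    A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
                A[i][k] = 0
            prev = A[k][k]
        return A

    # Augmented system [A | b]; with no row swaps the last pivot equals det(A).
    print(bareiss([[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]]))
    ```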

  20. Multifocal multiphoton microscopy with adaptive optical correction

    Science.gov (United States)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, owing to lower Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics, implemented by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with a significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024 pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
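    A two-dimensional weighted Gerchberg-Saxton loop for generating a set of beamlets can be sketched as follows; the instrument uses a 3D GS variant with hardware-specific calibration that is not modelled here, and all parameters are illustrative.

    ```python
    import numpy as np

    # Weighted Gerchberg-Saxton sketch: compute a phase-only SLM mask whose
    # Fourier transform concentrates light on a chosen set of focal spots, with
    # per-spot weights adjusted each iteration to equalise the spot intensities.

    def weighted_gs(target_spots, shape=(256, 256), n_iter=50):
        target = np.zeros(shape)
        target[tuple(np.array(target_spots).T)] = 1.0
        weights = target.copy()
        phase = np.exp(1j * 2 * np.pi * np.random.default_rng(6).random(shape))
        for _ in range(n_iter):
            focal = np.fft.fft2(phase)                       # SLM plane -> focal plane
            amp = np.abs(focal)
            spot_amp = amp[target > 0]
            # Boost spots that came out too dim relative to the mean (homogeneity).
            weights[target > 0] *= spot_amp.mean() / np.maximum(spot_amp, 1e-12)
            focal = weights * np.exp(1j * np.angle(focal))   # enforce weighted target amplitude
            slm = np.fft.ifft2(focal)                        # focal plane -> SLM plane
            phase = np.exp(1j * np.angle(slm))               # phase-only constraint
        return np.angle(phase)

    mask = weighted_gs([(64, 64), (64, 192), (192, 128)])    # three beamlets
    ```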