Algorithm FIRE-Feynman Integral REduction
International Nuclear Information System (INIS)
Smirnov, A.V.
2008-01-01
The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.
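As a concrete illustration of the kind of identity such reductions rest on (a textbook one-loop example, not FIRE's internal machinery): for the massless propagator-type integral F(a_1,a_2) = ∫ d^d k / [(k^2)^{a_1} ((k+q)^2)^{a_2}], integration by parts gives

```latex
0 = \int d^d k\, \frac{\partial}{\partial k^{\mu}}
\left[\frac{k^{\mu}}{(k^2)^{a_1}\,((k+q)^2)^{a_2}}\right]
\;\Longrightarrow\;
(d - 2a_1 - a_2)\,F(a_1,a_2) = a_2\left[F(a_1-1,\,a_2+1) - q^2\,F(a_1,\,a_2+1)\right].
```

Since F(a,0) and F(0,a) are massless tadpoles that vanish in dimensional regularization, repeated use of this relation expresses any F(a_1,a_2) in terms of the single master integral F(1,1); FIRE automates this kind of reduction for far more complicated multi-loop integral families.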
NIC symposium 2012. 25 years HLRZ/NIC. Proceedings
International Nuclear Information System (INIS)
Binder, Kurt
2012-01-01
For 25 years, the John von Neumann Institute for Computing (NIC), the former ''Hoechstleistungsrechenzentrum'' (HLRZ), has played a pioneering role in supporting research at the forefront of computational science by giving large grants of computer time to carefully selected research projects. The scope of these projects ranges from fundamental aspects of physics, such as the physics of elementary particles, nuclear physics, astrophysics, statistical physics and the physics of condensed matter, through computational chemistry and the life sciences, to more applied areas of research, such as the modelling of processes in the atmosphere, materials science, and fluid dynamics applications in engineering. These projects use the supercomputer resources that the Juelich Supercomputing Centre (JSC) provides. The present book, which appears in the framework of the biennial NIC Symposia series, continues a tradition started 10 years ago of presenting selected highlights of this research to a broader audience. Due to space restrictions, only a small number of the research projects carried out at the NIC can be presented in this way. Projects that stand out as particularly excellent are nominated as ''John von Neumann Excellence Project'' by the review board. In 2010 this award was given to A. Muramatsu (Stuttgart) for his project on ''Quantum Monte Carlo studies of strongly correlated systems''. In 2011, two such awards were given: one to C. Hoelbling (Wuppertal) for his project ''Computing B_K with 2+1 flavours at the physical mass point'', and another to W. Paul (Halle) for ''Long range correlations at polymer-solid interfaces''. The procedures adopted by the NIC to identify the scientifically best projects for the allocation of computer time are of the same character as those used by organisations founded more recently, such as (in Germany) the Gauss Centre for Supercomputing (GCS), an alliance of the three German national supercomputing centres in Juelich, Garching and Stuttgart.
NIC symposium 2010. Proceedings
Energy Technology Data Exchange (ETDEWEB)
Muenster, Gernot [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1]; Wolf, Dietrich [Duisburg-Essen Univ., Duisburg (Germany). Fakultaet fuer Physik]; Kremer, Manfred (eds.) [Forschungszentrum Juelich GmbH (DE). Juelich Supercomputing Centre (JSC)]
2012-06-21
The fifth NIC Symposium gave an overview of the activities of the John von Neumann Institute for Computing (NIC) and of the results obtained in the last two years by research groups supported by the NIC. The rapid recent progress in supercomputing is highlighted by the fact that the newly installed Blue Gene/P system in Juelich - with a peak performance of 1 Petaflop/s - currently ranks number four in the TOP500 list. This development opens new dimensions in simulation science for researchers in Germany and Europe. NIC - a joint foundation of Forschungszentrum Juelich, Deutsches Elektronen-Synchrotron (DESY) and Gesellschaft fuer Schwerionenforschung (GSI) - supports, with its members' supercomputer facilities, about 130 research groups at universities and national labs working on computer simulations in various fields of science. Fifteen invited lectures covered selected topics in the following fields: astrophysics, biophysics, chemistry, elementary particle physics, condensed matter, materials science, soft matter science, environmental research, hydrodynamics and turbulence, plasma physics, and computer science. The talks are intended to inform a broad audience of scientists and the interested public about the research activities at NIC. The proceedings of the symposium cover projects supported by the IBM supercomputers JUMP and Blue Gene/P in Juelich and by the APE topical computer at DESY-Zeuthen, in an even wider range than the lectures.
Radial Variations of Outward and Inward Alfvénic Fluctuations Based on Ulysses Observations
Yang, L.; Lee, L. C.; Li, J. P.; Luo, Q. Y.; Kuo, C. L.; Shi, J. K.; Wu, D. J.
2017-12-01
Ulysses magnetic and plasma data are used to study hourly-scale Alfvénic fluctuations in the solar polar wind. The calculated energy ratio R_{vA}^2(cal) of inward to outward Alfvén waves is obtained from the observed Walén slope through an analytical expression, and the observed R_{vA}^2(obs) is based on a direct decomposition of the original Alfvénic fluctuations into outward- and inward-propagating Alfvén waves. The radial variation of R_{vA}^2(cal) shows a monotonically increasing trend with heliocentric distance r, implying an increasing local generation or contribution of inward Alfvén waves. This contribution is also shown by the radial increase in the occurrence of dominant inward fluctuations. We further point out a higher occurrence (~83% of a day on average) of dominant outward Alfvénic fluctuations in the solar wind than previously estimated. Since R_{vA}^2(cal) is more accurate than R_{vA}^2(obs) in measuring the energy ratio for dominant outward fluctuations, the values of R_{vA}^2(cal) in our results are likely more realistic for the solar wind than both previous estimates and the R_{vA}^2(obs) values in our results. The duration ratio R_T of dominant inward to all Alfvénic fluctuations increases monotonically with r, and is about two or more times that from Voyager 2 observations at r ≥ 4 au. These results reveal new qualitative and quantitative features of Alfvénic fluctuations in the polar wind compared with previous studies, and put constraints on modeling the variation of solar wind fluctuations.
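For context, such outward/inward decompositions are conventionally built on Elsässer variables; a common formulation (a sketch of the standard convention, which may differ in detail from the paper's) is

```latex
\mathbf{z}^{\pm} = \delta\mathbf{v} \mp \operatorname{sign}(B_r)\,\frac{\delta\mathbf{B}}{\sqrt{\mu_0\rho}},
\qquad
R^2 = \frac{e^{\mathrm{in}}}{e^{\mathrm{out}}} = \frac{\langle|\mathbf{z}^{-}|^2\rangle}{\langle|\mathbf{z}^{+}|^2\rangle},
```

where z^+ (z^-) describes Alfvénic fluctuations propagating away from (toward) the Sun, and a pure outward Alfvén wave yields a Walén slope of magnitude one.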
An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition
Directory of Open Access Journals (Sweden)
Jun Huang
2014-01-01
We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. The LDA is used to project samples to a new discriminant feature space, while the K nearest neighbor (KNN) classifier is adopted for sample set classification. The algorithm is validated on the ORL, FERET, and YALE face databases and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
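A minimal sketch of the comparison pipeline in scikit-learn terms (MPCA itself is not in scikit-learn, so plain PCA stands in; the Olivetti faces serve as an ORL-like dataset):

```python
from sklearn.datasets import fetch_olivetti_faces  # the AT&T/ORL face images
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Tensor-aware MPCA is replaced here by PCA; LDA then projects to a
# discriminant subspace and KNN performs the final classification.
clf = make_pipeline(PCA(n_components=100, whiten=True),
                    LinearDiscriminantAnalysis(),
                    KNeighborsClassifier(n_neighbors=3))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```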
FPGA based algorithms for data reduction at Belle II
Energy Technology Data Exchange (ETDEWEB)
Muenchow, David; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Liu, Ming; Spruck, Bjoern [II. Physikalisches Institut, Universitaet Giessen (Germany)
2011-07-01
Belle II, the upgrade of the existing Belle experiment at SuperKEKB in Tsukuba, Japan, is an asymmetric e+e- collider with a design luminosity of 8×10^35 cm^-2 s^-1. At Belle II the estimated event rate is ≤30 kHz, and the resulting data rate at the Pixel Detector (PXD) will be ≤7.2 GB/s. This data rate needs to be reduced to be able to process and store the data. Region of interest (ROI) selection is based upon two mechanisms: (a) a tracklet finder using the silicon strip detector, and (b) the high level trigger (HLT) using all other Belle II subdetectors. These ROIs and the pixel data are forwarded to an FPGA-based Compute Node for processing, where a VHDL-based algorithm, benefiting from pipelining and parallelisation on the FPGA, will be implemented. For fast data handling we developed a dedicated memory management system for buffering and storing the data. The status of the implementation and performance tests of the memory manager and data reduction algorithm are presented.
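The ROI idea itself is simple to state in software terms; the following is a hedged Python sketch of the concept only (hit and ROI formats are invented for illustration; the real system implements this in VHDL on the FPGA):

```python
from typing import List, Tuple

ROI = Tuple[int, int, int, int]   # (row_min, row_max, col_min, col_max), illustrative
Hit = Tuple[int, int, int]        # (row, col, adc value), illustrative

def reduce_pixel_data(hits: List[Hit], rois: List[ROI]) -> List[Hit]:
    """Keep only pixel hits that fall inside at least one region of interest."""
    return [(r, c, adc) for (r, c, adc) in hits
            if any(r0 <= r <= r1 and c0 <= c <= c1 for (r0, r1, c0, c1) in rois)]

hits = [(10, 20, 77), (400, 600, 12), (55, 55, 31)]
rois = [(0, 63, 0, 63), (390, 410, 590, 610)]
print(reduce_pixel_data(hits, rois))   # only the hits inside the ROIs survive
```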
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One class of DR algorithms is based on latent variable models (LVM), and LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. Moreover, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination, BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Directory of Open Access Journals (Sweden)
Xuyun FU
2018-01-01
The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry, and the replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective, and that the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the result of the traditional method, and that it can provide support for determining to-be-replaced LLPs when deciding the maintenance workscope of an aircraft engine. The algorithm is therefore applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.
CacheCard: Caching static and dynamic content on the NIC
Bos, Herbert; Huang, Kaiming
2009-01-01
CacheCard is a NIC-based cache for static and dynamic web content, designed so that it can be implemented on simple devices like NICs. It requires neither an understanding of the way dynamic data is generated, nor the execution of scripts on the cache. By monitoring file system activity and potential
Feature Reduction Based on Genetic Algorithm and Hybrid Model for Opinion Mining
Directory of Open Access Journals (Sweden)
P. Kalaivani
2015-01-01
With the rapid growth of websites and web forums, a large number of product reviews have become available online. An opinion mining system is needed to help people evaluate the emotions, opinions, attitudes, and behavior of others, and to support decisions based on user preferences. In this paper, we propose an optimized feature reduction that incorporates an ensemble method of machine learning approaches, using information gain and a genetic algorithm as feature reduction techniques. We conducted comparative experiments on a multidomain review dataset and a movie review dataset in opinion mining. The effectiveness of the single classifiers Naïve Bayes, logistic regression, and support vector machine, and of the ensemble technique, is compared on five datasets. The proposed hybrid method is evaluated, and experimental results show that information gain and the genetic algorithm combined with the ensemble technique perform better in terms of various measures for multidomain and movie reviews. The classification algorithms are evaluated using McNemar's test to compare their levels of significance.
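A minimal sketch of the two named ingredients (an information-gain-style feature filter and a majority-vote ensemble of the three classifiers) in scikit-learn; the genetic-algorithm stage and the actual review corpora are omitted, with a newsgroup pair standing in for sentiment data:

```python
from sklearn.datasets import fetch_20newsgroups          # stand-in corpus
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

data = fetch_20newsgroups(subset="train", categories=["rec.autos", "sci.med"])
clf = make_pipeline(
    TfidfVectorizer(max_features=5000),
    SelectKBest(mutual_info_classif, k=500),              # information-gain-style reduction
    VotingClassifier([("nb", MultinomialNB()),
                      ("lr", LogisticRegression(max_iter=1000)),
                      ("svm", LinearSVC())]),              # hard-voting ensemble
)
clf.fit(data.data, data.target)
```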
Passive Classification of Wireless NICs during Rate Switching
Directory of Open Access Journals (Sweden)
Cherita L. Corbett
2008-02-01
Computer networks have become increasingly ubiquitous. However, with the increase in networked applications, there has also been an increase in the difficulty of managing and securing these networks. The proliferation of 802.11 wireless networks has heightened this problem by extending networks beyond physical boundaries. We propose the use of spectral analysis to identify the type of wireless network interface card (NIC). This mechanism can be applied to support the detection of unauthorized systems that use NICs different from that of a legitimate system. We focus on rate switching, a vaguely specified mechanism required by the 802.11 standard that is implemented in the hardware and software of the wireless NIC. We show that the implementation of this function influences the transmission patterns of a wireless stream, which are observable through traffic analysis. Our mechanism for NIC identification uses signal processing to analyze the periodicity embedded in the wireless traffic caused by rate switching. A stable spectral profile is created from the periodic components of the traffic and used as the identity of the wireless NIC. We show that we can distinguish between NICs manufactured by different vendors, as well as NICs manufactured by the same vendor, using their spectral profiles.
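The core signal-processing step can be sketched as follows (a simplified illustration with an invented trace format: packet arrival timestamps in seconds):

```python
import numpy as np

def spectral_profile(timestamps: np.ndarray, bin_ms: float = 1.0) -> np.ndarray:
    """Bin packet arrivals into a rate series and return its normalised magnitude
    spectrum; periodic behaviour such as rate switching appears as stable peaks."""
    t = timestamps - timestamps.min()
    n_bins = int(t.max() / (bin_ms / 1000.0)) + 1
    series, _ = np.histogram(t, bins=n_bins)     # packets per time bin
    series = series - series.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(series))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

# Profiles of a candidate NIC and a reference NIC can then be compared,
# e.g. via their inner product: similarity = float(profile_a @ profile_b)
```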
Investigating Alfvénic wave propagation in coronal open-field regions
Morton, R. J.; Tomczyk, S.; Pinto, R.
2015-01-01
The physical mechanisms behind accelerating solar and stellar winds are a long-standing astrophysical mystery, although recent breakthroughs have come from models invoking the turbulent dissipation of Alfvén waves. The existence of Alfvén waves far from the Sun has been known since the 1970s, and recently the presence of ubiquitous Alfvénic waves throughout the solar atmosphere has been confirmed. However, the presence of atmospheric Alfvénic waves does not, alone, provide sufficient support for wave-based models; the existence of counter-propagating Alfvénic waves is crucial for the development of turbulence. Here, we demonstrate that counter-propagating Alfvénic waves exist in open coronal magnetic fields and reveal key observational insights into the details of their generation, reflection in the upper atmosphere and outward propagation into the solar wind. The results enhance our knowledge of Alfvénic wave propagation in the solar atmosphere, providing support and constraints for some of the recent Alfvén wave turbulence models. PMID:26213234
Department of Housing and Urban Development — The NSP Investment Cluster (NIC) study analyzes how markets treated with a concentration of NSP investment have changed over time compared to similar markets that...
ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM
Directory of Open Access Journals (Sweden)
Mohammed Adam Taheir Mohammed
2016-02-01
In this paper, the parameterization value reduction of soft sets and its algorithm in decision making are studied and described, building on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least important parameter values from a soft set. Through the analysis, two techniques are described. The study finds that parameterization reduction of soft sets and its algorithm can yield inconsistent and suboptimal results.
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research and for applications of ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, showing the effectiveness of the algorithm in complicated information systems.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
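For reference, the classic error reduction iteration that the method builds on alternates between a Fourier-magnitude constraint and a spatial-domain constraint. A generic sketch (a simple support/non-negativity constraint stands in for the known-pixel constraint used in inpainting):

```python
import numpy as np

def error_reduction(magnitude: np.ndarray, support: np.ndarray,
                    n_iter: int = 200, seed: int = 0) -> np.ndarray:
    """Recover an image from its Fourier magnitude; `support` is a boolean mask."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # enforce the known magnitude
        x = np.real(np.fft.ifft2(X))
        x = np.where(support & (x > 0), x, 0.0)    # enforce support / non-negativity
    return x
```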
An algorithm for reduct cardinality minimization
AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail
2013-01-01
This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed.
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical report ''Parallel Algorithms for Groebner-Basis Reduction'', from the project ''Productivity Engineering in the UNIX Environment''.
Deng, Honggui; Liu, Yan; Ren, Shuang; He, Hailang; Tang, Chengying
2017-10-01
We propose an enhanced partial transmit sequence technique based on a novel peak-value feedback algorithm and a genetic algorithm (GAPFA-PTS) to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals in visible light communication (VLC) systems (VLC-OFDM). To demonstrate the advantages of the proposed algorithm, we analyze the flow of the proposed technique and compare its performance with other techniques through MATLAB simulation. The results show that the GAPFA-PTS technique achieves a significant improvement in PAPR reduction while maintaining a low bit error rate (BER) and low complexity in VLC-OFDM systems.
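For background, conventional PTS with an exhaustive phase search (the baseline whose exponential complexity GA- and PSO-style searches aim to avoid) can be sketched as follows, using a toy QPSK/OFDM example:

```python
import numpy as np
from itertools import product

def papr_db(x: np.ndarray) -> float:
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def pts(symbols: np.ndarray, n_sub: int = 4, phases=(1, -1, 1j, -1j)):
    """Partition subcarriers into subblocks, weight each subblock by a phase
    factor, and keep the combination with the lowest PAPR."""
    blocks = np.array_split(symbols, n_sub)
    partial = [np.fft.ifft(np.concatenate(
        [b if i == k else np.zeros_like(b) for i, b in enumerate(blocks)]))
        for k in range(n_sub)]
    best, best_papr = None, np.inf
    for w in product(phases, repeat=n_sub):      # |phases|**n_sub combinations
        cand = sum(wk * pk for wk, pk in zip(w, partial))
        p = papr_db(cand)
        if p < best_papr:
            best, best_papr = cand, p
    return best, best_papr

qpsk = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, 256))  # 256 subcarriers
signal, papr = pts(qpsk)
```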
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Plant operation data collection and database management using NIC system
International Nuclear Information System (INIS)
Inase, S.
1990-01-01
The Nuclear Information Center (NIC), a division of the Central Research Institute of Electric Power Industry, collects nuclear power plant operation and maintenance information both in Japan and abroad and transmits it to all domestic utilities so that it can be effectively utilized for safe plant operation and reliability enhancement. The collected information is entered into the database system after being keyworded by NIC. The database system, the Nuclear Information database/Communication System (NICS), has been developed by NIC for the storage and management of collected information. The keywords serve two objectives: retrieval and classification by keyword category.
Study on the Noise Reduction of Vehicle Exhaust NOX Spectra Based on Adaptive EEMD Algorithm
Directory of Open Access Journals (Sweden)
Kai Zhang
2017-01-01
Measuring the concentration of vehicle exhaust components from transmission spectra has become a key technology. In conventional methods for noise reduction and baseline correction, such as wavelet transform, derivative, interpolation, and polynomial fitting, the basis functions of the algorithms, the number of decomposition layers, and the way the signal is reconstructed all have to be adjusted according to the characteristics of the different components in the transmission spectra. The parameter settings of these algorithms are not known a priori, so it is difficult to achieve the best noise reduction for vehicle exhaust spectra, whose waveforms are sharp and drastic. In this paper, an adaptive ensemble empirical mode decomposition (EEMD) denoising model based on a special normalized index optimization is proposed and applied to the spectral noise reduction of vehicle exhaust NOX. Experimental results show that the method can effectively improve the accuracy of spectral noise reduction while simplifying the denoising process and reducing its operational difficulty.
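A minimal sketch of the EEMD step using the third-party PyEMD package (installed as EMD-signal); which IMFs to discard is exactly what the paper's normalized-index optimization decides adaptively, so the fixed choice below is assumed purely for illustration:

```python
import numpy as np
from PyEMD import EEMD   # third-party package: pip install EMD-signal

t = np.linspace(0.0, 1.0, 1000)
noisy = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.1 * np.random.randn(t.size)  # toy line

eemd = EEMD(trials=100, noise_width=0.2)   # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(noisy, t)

# Assume the first two IMFs are noise-dominated; rebuild from the rest.
denoised = imfs[2:].sum(axis=0)
```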
Metal artifact reduction algorithm based on model images and spatial information
Energy Technology Data Exchange (ETDEWEB)
Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)
2011-10-01
Computed tomography (CT) has become one of the most favorable choices for the diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact-contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed, and the mean CT number of water improved from -28.95 ± 97.97 to -4.76 ± 4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact-contaminated CT images.
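The three steps lend themselves to a compact sketch (hedged: the clustering features and the exact form of the exponential weighting are assumptions for illustration, using scikit-learn and scikit-image):

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.transform import radon, iradon

def mar_model_based(image, alpha=0.05, n_clusters=4, beta=0.5):
    """Sketch: k-means model image (with spatial features), exponential blend
    of the two sinograms, then filtered back-projection."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([image.ravel(),
                             alpha * xs.ravel(), alpha * ys.ravel()])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit(feats).labels_
    model = np.zeros(h * w)
    for c in range(n_clusters):          # each cluster takes its mean intensity
        model[labels == c] = image.ravel()[labels == c].mean()
    model = model.reshape(h, w)

    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    p_orig, p_model = radon(image, theta), radon(model, theta)
    wgt = np.exp(-beta * np.abs(p_orig - p_model))   # assumed weighting form
    p_corr = wgt * p_orig + (1 - wgt) * p_model
    return iradon(p_corr, theta)
```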
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Directory of Open Access Journals (Sweden)
Zhongliang Deng
2018-01-01
The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition of the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm converges within finitely many steps and always produces better results than the original LLL reduction algorithm for n > 2. Simulations clearly show that n-LLL is better than the original LLL at reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
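For readers unfamiliar with the baseline being extended, here is a compact (inefficient but standard) textbook LLL sketch, without the paper's n-dimensional Lovász condition or pivoted Householder reflection:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the rows of B, with mu coefficients."""
    n = B.shape[0]
    Bs, mu = np.zeros(B.shape), np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i].astype(float)
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Textbook LLL reduction of the rows of an integer basis matrix B."""
    B = B.copy().astype(np.int64)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                 # Lovász condition satisfied
        else:
            B[[k, k - 1]] = B[[k - 1, k]]          # swap and backtrack
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))
```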
Directory of Open Access Journals (Sweden)
Anamaria Alves Napoleão
2006-08-01
This study aimed to review the knowledge produced on the Nursing Interventions Classification (NIC) available in the scientific literature from January 1980 to January 2004. The NIC is a taxonomy that includes activities performed by nurses. The Lilacs and Medline databases were consulted, a manual search was carried out at the Center for Nursing Classification of the University of Iowa College of Nursing, and a thesis obtained from a private collection was also included. The works analyzed concerned the application of the NIC in practice, the comparison of languages in computerized systems and the use of the NIC in such systems, and the presentation, construction, development and validation of the taxonomy, among other topics. It was concluded that there are many possibilities for producing knowledge about the NIC in Brazil, and that studies on this taxonomy are needed that raise questions, generate new knowledge, and contribute further to the advancement of Brazilian nursing.
Energy Technology Data Exchange (ETDEWEB)
Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)
2014-02-18
This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is indicated when all frequency components have a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples; consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
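A toy illustration of the balanced-truncation step, assuming the python-control package (whose balred function needs the slycot backend); the genetic-algorithm state selection and congruence transformation described in the paper are not reproduced here:

```python
import numpy as np
import control   # python-control; balred requires the slycot dependency

# Random stable LTI system standing in for an aeroservoelastic plant model
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20)) - 25.0 * np.eye(20)   # shift ensures stability
sys_full = control.ss(A, rng.standard_normal((20, 1)),
                      rng.standard_normal((1, 20)), np.zeros((1, 1)))

# Balanced truncation from 20 states down to 6
sys_red = control.balred(sys_full, 6, method="truncate")
```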
Can Hall effect trigger Kelvin-Helmholtz instability in sub-Alfvénic flows?
Pandey, B. P.
2018-05-01
In Hall magnetohydrodynamics, the onset condition of the Kelvin-Helmholtz instability is solely determined by the Hall effect and is independent of the nature of the shear flow. In addition, the physical mechanism behind super- and sub-Alfvénic flows becoming unstable is quite different: the high-frequency, right circularly polarized whistler becomes unstable in super-Alfvénic flows, whereas the low-frequency, left circularly polarized ion-cyclotron wave becomes unstable in the presence of sub-Alfvénic shear flows. The growth rate of the Kelvin-Helmholtz instability in the super-Alfvénic case is higher than the corresponding ideal magnetohydrodynamic rate. In the sub-Alfvénic case, the Hall effect opens up a new channel, hitherto inaccessible to magnetohydrodynamics, through which the partially or fully ionized fluid can become Kelvin-Helmholtz unstable. The instability growth rate in this case is smaller than in the super-Alfvénic case, owing to the smaller free shear energy content of the flow. When the Hall term is somewhat smaller than the advection term in the induction equation, the Hall effect is also responsible for the appearance of a new overstable mode whose growth rate is smaller than that of the purely growing Kelvin-Helmholtz mode. On the other hand, when Hall diffusion dominates the advection term, the growth rate of the instability depends only on the Alfvén Mach number and is independent of the Hall diffusion coefficient. Further, the growth rate in this case increases linearly with the Alfvén frequency, with a smaller slope for sub-Alfvénic flows.
Strategic management of nuclear technology information through the KNGR NIC experience
International Nuclear Information System (INIS)
Kang, G. D.; Moon, C. G.
2000-01-01
The key to the success of nuclear information management is whether nuclear engineers focus their attention on such management or not. This paper focuses on the strategic management of nuclear technology information through the KNGR NIC experience. The present KEPCO systems for nuclear information are largely divided into four parts: the Integrated NPP Information System, the system produced by Business Process Reengineering for Construction, the KNGR IMS, and the Document Management Systems of Plant DDCC. In order to accommodate effective system integration, it is not desirable for one system to be subordinate to others, because each system has its own workflows and practices, site by site. Therefore, for a good NIS, the system should be developed as a web-based local computer system with a bottom-up approach. NIC management has to focus on domain control, nuclear portal services, and the development of a search engine.
Laguda, Edcer Jerecho
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around it due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses Gemstone Spectral Imaging (GSI)-based MAR algorithm, projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three
Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss
Zenil, Hector
2018-02-16
The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are among the prime examples. Consequently, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (enumerable computable) property contributing to the object's algorithmic content, and thus important to preserve, in a process of data dimension reduction that forces the algorithm to delete the least important features first. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations of efficient (polynomial) estimations, we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude, and demonstrate numerically, that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than other data reduction and network sparsification methods.
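The flavour of the procedure can be conveyed with a deliberately crude sketch in which zlib compression stands in for the paper's algorithmic-complexity estimates (the actual method uses far better approximations, such as the Block Decomposition Method):

```python
import zlib
import networkx as nx

def complexity(g: nx.Graph) -> int:
    """Crude proxy: compressed length of the flattened adjacency matrix."""
    bits = "".join(str(int(v)) for row in nx.to_numpy_array(g) for v in row)
    return len(zlib.compress(bits.encode()))

def mils_step(g: nx.Graph) -> nx.Graph:
    """Delete the edge whose removal changes the (proxy) algorithmic content least."""
    base = complexity(g)
    edge = min(g.edges,
               key=lambda e: abs(complexity(nx.restricted_view(g, [], [e])) - base))
    h = g.copy()
    h.remove_edge(*edge)
    return h

g = nx.karate_club_graph()
for _ in range(10):        # sparsify by ten edges, least informative first
    g = mils_step(g)
```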
Ion, Bogdan; Kazim, Erum; Gauld, James
2014-01-01
Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The o...
Implementing peak load reduction algorithms for household electrical appliances
International Nuclear Information System (INIS)
Dlamini, Ndumiso G.; Cromieres, Fabien
2012-01-01
Considering household appliance automation for reduction of household peak power demand, this study explored aspects of the interaction between household automation technology and human behaviour. Given a programmable household appliance switching system, and user-reported appliance use times, we simulated the load reduction effectiveness of three types of algorithms, which were applied at both the single household level and across all 30 households. All three algorithms effected significant load reductions, while the least-to-highest potential user inconvenience ranking was: coordinating the timing of frequent intermittent loads (algorithm 2); moving period-of-day time-flexible loads to off-peak times (algorithm 1); and applying short-term time delays to avoid high peaks (algorithm 3) (least accommodating). Peak reduction was facilitated by load interruptibility, time of use flexibility and the willingness of users to forgo impulsive appliance use. We conclude that a general factor determining the ability to shift the load due to a particular appliance is the time-buffering between the service delivered and the power demand of an appliance. Time-buffering can be ‘technologically inherent’, due to human habits, or realised by managing user expectations. There are implications for the design of appliances and home automation systems. - Highlights: ► We explored the interaction between appliance automation and human behaviour. ► There is potential for considerable load shifting of household appliances. ► Load shifting for load reduction is eased with increased time buffering. ► Design, human habits and user expectations all influence time buffering. ► Certain automation and appliance design features can facilitate load shifting.
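As a toy illustration of the first algorithm type (moving period-of-day time-flexible loads to off-peak times), consider the following greedy sketch; the data format is invented, and a real system would respect the user-reported constraints discussed above:

```python
import numpy as np

def shift_flexible_loads(base_profile, flexible_loads):
    """Place each flexible load (kw, duration_h, allowed start hours) at the
    start hour that minimises the resulting daily peak demand."""
    profile = np.asarray(base_profile, dtype=float)   # 24 hourly values, kW

    for kw, dur, allowed in flexible_loads:
        def peak_if(start):
            trial = profile.copy()
            trial[start:start + dur] += kw
            return trial.max()
        best = min((h for h in allowed if h + dur <= 24), key=peak_if)
        profile[best:best + dur] += kw
    return profile

base = [0.4] * 6 + [1.5] * 3 + [0.8] * 8 + [2.2] * 4 + [0.9] * 3
loads = [(1.0, 2, range(0, 22)), (0.5, 3, range(8, 20))]   # e.g. washer, dishwasher
print(shift_flexible_loads(base, loads).max())              # reduced daily peak
```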
Directory of Open Access Journals (Sweden)
Ho-Lung Hung
2008-08-01
A suboptimal partial transmit sequence (PTS) technique based on a particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, with search complexity increasing exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce computational complexity while achieving PAPR reduction.
Controlador electrònic programable per vehicles de maquinària pesada [Programmable electronic controller for heavy machinery vehicles]
Gómez Areste, Sergi
2017-01-01
Design of an electronic controller for heavy machinery based on reconfigurable logic systems.
Yoon, Heakyung C.; Loftness, Vivian
2002-05-01
Lack of speech privacy has been reported to be the main source of dissatisfaction among occupants of open workplaces, according to workplace surveys. Two speech privacy measurements, the Articulation Index (AI), standardized by the American National Standards Institute in 1969, and the Speech Privacy Noise Isolation Class (NIC', Noise Isolation Class Prime), adapted from the Noise Isolation Class (NIC) by the U.S. General Services Administration (GSA) in 1979, have been claimed as objective tools to measure speech privacy in open offices. To evaluate which of them - normal privacy for AI or satisfied privacy for NIC' - is the better tool for speech privacy in a dynamic open office environment, measurements were taken in the field. AIs and NIC's for different partition heights and workplace configurations were measured following ASTM E1130 (Standard Test Method for Objective Measurement of Speech Privacy in Open Offices Using the Articulation Index) and GSA tests PBS-C.1 (Method for the Direct Measurement of Speech-Privacy Potential (SPP) Based on Subjective Judgments) and PBS-C.2 (Public Building Service Standard Method of Test for the Sufficient Verification of Speech-Privacy Potential (SPP) Based on Objective Measurements Including Methods for the Rating of Functional Interzone Attenuation and NC-Background), respectively.
NaNet: a flexible and configurable low-latency NIC for real-time trigger systems based on GPUs
International Nuclear Information System (INIS)
Ammendola, R; Biagioni, A; Frezza, O; Lonardo, A; Cicero, F Lo; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P; Lamanna, G; Pantaleo, F; Sozzi, M
2014-01-01
NaNet is an FPGA-based PCIe X8 Gen2 NIC supporting 1/10 GbE links and the custom 34 Gbps APElink channel. The design has GPUDirect RDMA capabilities and features a network stack protocol offloading module, making it suitable for building low-latency, real-time GPU-based computing systems. We provide a detailed description of the NaNet hardware modular architecture. Benchmarks for latency and bandwidth for the GbE and APElink channels are presented, followed by a performance analysis on the case study of the GPU-based low level trigger for the RICH detector in the NA62 CERN experiment, using either the NaNet GbE or APElink channel. Finally, we give an outline of future project activities.
NaNet: a flexible and configurable low-latency NIC for real-time trigger systems based on GPUs
INSPIRE-00646837; Biagioni, A.; Frezza, O.; Lamanna, G.; Lonardo, A.; Lo Cicero, F.; Paolucci, P.S.; Pantaleo, F.; Rossetti, D.; Simula, F.; Sozzi, M.; Tosoratto, L.; Vicini, P.
2014-02-21
NaNet is an FPGA-based PCIe X8 Gen2 NIC supporting 1/10 GbE links and the custom 34~Gbps APElink channel. The design has GPUDirect RDMA capabilities and features a network stack protocol offloading module, making it suitable for building low-latency, real-time GPU-based computing systems. We provide a detailed description of the NaNet hardware modular architecture. Benchmarks for latency and bandwidth for GbE and APElink channels are presented, followed by a performance analysis on the case study of the GPU-based low level trigger for the RICH detector in the NA62 CERN experiment, using either the NaNet GbE and APElink channels. Finally, we give an outline of project future activities.
NaNet: a flexible and configurable low-latency NIC for real-time trigger systems based on GPUs
Energy Technology Data Exchange (ETDEWEB)
Ammendola, R [INFN Sezione di Roma Tor Vergata, Via della Ricerca Scientifica, 1 - 00133 Roma (Italy); Biagioni, A; Frezza, O; Lonardo, A; Cicero, F Lo; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Sezione di Roma, P.le Aldo Moro, 2 - 00185 Roma (Italy); Lamanna, G; Pantaleo, F; Sozzi, M [INFN Sezione di Pisa, Via F. Buonarroti 2 - 56127 Pisa (Italy)
2014-02-01
NaNet is an FPGA-based PCIe X8 Gen2 NIC supporting 1/10 GbE links and the custom 34 Gbps APElink channel. The design has GPUDirect RDMA capabilities and features a network stack protocol offloading module, making it suitable for building low-latency, real-time GPU-based computing systems. We provide a detailed description of the NaNet hardware modular architecture. Benchmarks for latency and bandwidth for GbE and APElink channels are presented, followed by a performance analysis on the case study of the GPU-based low level trigger for the RICH detector in the NA62 CERN experiment, using either the NaNet GbE or APElink channels. Finally, we give an outline of future project activities.
Generalized phase retrieval algorithm based on information measures
Shioya, Hiroyuki; Gohara, Kazutoshi
2006-01-01
An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
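As context for how such iterative schemes work, here is a minimal NumPy sketch of the conventional error-reduction algorithm that the paper generalizes. The random starting phases, support mask, and iteration count are illustrative assumptions; the paper's MEM-type and information-measure variants replace the object-domain update with entropy-based steps, which this sketch does not implement.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Conventional error-reduction phase retrieval: alternate between
    the Fourier-domain constraint (measured magnitudes) and the
    object-domain constraint (zero outside the support, real-valued)."""
    rng = np.random.default_rng(seed)
    phases = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    g = np.fft.ifft2(magnitudes * phases)          # random-phase start
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitudes * np.exp(1j * np.angle(G))  # impose measured |F|
        g = np.fft.ifft2(G)
        g = np.where(support, g.real, 0.0)         # impose support constraint
    return g

# toy object: a bright block inside a loose support mask
obj = np.zeros((64, 64)); obj[28:36, 30:38] = 1.0
support = np.zeros((64, 64), bool); support[20:44, 22:46] = True
rec = error_reduction(np.abs(np.fft.fft2(obj)), support)
print(float(np.abs(rec).max()))
```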
NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems
International Nuclear Information System (INIS)
Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero; Fantechi, Riccardo; Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco; Pontisso, Luca
2014-01-01
We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.
NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems
Energy Technology Data Exchange (ETDEWEB)
Ammendola, Roberto [INFN, Rome – Tor Vergata (Italy); Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero [INFN, Rome – Sapienza (Italy); Fantechi, Riccardo [CERN, Geneve (Switzerland); Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco [INFN, Pisa (Italy); Pontisso, Luca [University, Rome (Italy)
2014-06-11
We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.
NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems
INSPIRE-00646837; Biagioni, Andrea; Fantechi, Riccardo; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Paolucci, Pier Stanislao; Pantaleo, Felice; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Tosoratto, Laura; Vicini, Piero
2014-01-01
We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.
All-electron ab initio investigations of the electronic states of the NiC molecule
DEFF Research Database (Denmark)
Shim, Irene; Gingerich, Karl. A.
1999-01-01
The low-lying electronic states of NiC are investigated by all-electron ab initio multi-configuration self-consistent-field (CASSCF) calculations including relativistic corrections. The electronic structure of NiC is interpreted as perturbed antiferromagnetic couplings of the localized angular...
Applying Groebner bases to solve reduction problems for Feynman integrals
International Nuclear Information System (INIS)
Smirnov, Alexander V.; Smirnov, Vladimir A.
2006-01-01
We describe how Groebner bases can be used to solve the reduction problem for Feynman integrals, i.e. to construct an algorithm that provides the possibility to express a Feynman integral of a given family as a linear combination of some master integrals. Our approach is based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. We illustrate it through various examples of reduction problems for families of one- and two-loop Feynman integrals. We also solve the reduction problem for a family of integrals contributing to the three-loop static quark potential.
Applying Groebner bases to solve reduction problems for Feynman integrals
Energy Technology Data Exchange (ETDEWEB)
Smirnov, Alexander V. [Mechanical and Mathematical Department and Scientific Research Computer Center of Moscow State University, Moscow 119992 (Russian Federation); Smirnov, Vladimir A. [Nuclear Physics Institute of Moscow State University, Moscow 119992 (Russian Federation)
2006-01-15
We describe how Groebner bases can be used to solve the reduction problem for Feynman integrals, i.e. to construct an algorithm that provides the possibility to express a Feynman integral of a given family as a linear combination of some master integrals. Our approach is based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. We illustrate it through various examples of reduction problems for families of one- and two-loop Feynman integrals. We also solve the reduction problem for a family of integrals contributing to the three-loop static quark potential.
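For readers unfamiliar with the underlying machinery, the following sympy sketch shows the commutative analogue of the idea: a Groebner basis turns a set of polynomial relations into one with a canonical remainder under division, just as integration-by-parts identities are used to reduce integrals to masters. This is only an analogy; the paper's actual construction works with non-commutative shift-operator polynomials, which this toy example does not capture.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# Two polynomial relations playing the role of IBP-type identities.
relations = [x**2 + y**2 - 1, x*y - 1]
gb = groebner(relations, x, y, order='lex')
print(gb)

# Reducing a polynomial modulo the basis yields a canonical remainder,
# the analogue of expressing an integral through master integrals.
quotients, remainder = gb.reduce(x**3 * y)
print(remainder)
```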
On the distribution of energy versus Alfvénic correlation for polar wind fluctuations
Directory of Open Access Journals (Sweden)
B. Bavassano
2006-11-01
Previous analyses have shown that polar wind fluctuations at MHD scales appear as a mixture of Alfvénic fluctuations and variations with an energy imbalance in favour of the magnetic term. In the present study, by separately examining the behaviour of kinetic and magnetic energies versus the Alfvénic correlation level, we unambiguously confirm that the second population is essentially related to a large increase of the magnetic energy with respect to that of the Alfvénic population. The relevant new result is that this magnetic population, though of secondary importance in terms of occurrence frequency, corresponds to a primary peak in the distribution of total energy. The fact that this holds in the case of the polar wind, which is the least structured type of interplanetary plasma flow and has the slowest evolving Alfvénic turbulence, strongly suggests the general conclusion that magnetic structures cannot be neglected when modelling fluctuations for all kinds of wind regimes.
Kinetics of Ni:C Thin Film Composition Formation at Different Temperatures and Fluxes
Directory of Open Access Journals (Sweden)
Gediminas KAIRAITIS
2013-09-01
In this work, an analysis of Ni:C thin-film growth on thermally oxidized Si substrates using a proposed kinetic model is presented. The model is built on experimental results in which the microstructure evolution of Ni:C nanocomposite films grown by hyperthermal ion deposition is investigated as a function of substrate temperature and metal content. The proposed kinetic model is based on rate equations and includes the processes of adsorption, surface segregation, diffusion, and chemical reactions of the constituents. The experimental depth-profile curves were fitted using the proposed model. The obtained results show good agreement with experiment when concentration-dependent diffusion is taken into account. Modelling shows that, with increasing substrate temperature, nickel surface segregation becomes the dominant process. DOI: http://dx.doi.org/10.5755/j01.ms.19.3.5234
Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory
Directory of Open Access Journals (Sweden)
Feng Hu
2012-01-01
The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. Although some results based on the divide and conquer method have been obtained in rough set theory, systematic methods for knowledge reduction based on divide and conquer are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under an equivalence relation and under a tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on divide and conquer in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are given. Experimental evaluations are carried out on UCI data sets and the KDD Cup 99 data sets. The results illustrate that the proposed approaches process large data sets efficiently with good recognition rates, compared with KNN, SVM, C4.5, Naive Bayes, and CART.
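As a concrete illustration of the positive-region machinery that underlies attribute reduction in rough set theory, the following minimal Python sketch computes one (non-unique) reduct greedily. The toy decision table, the greedy elimination order, and the function names are illustrative assumptions, not the paper's divide-and-conquer algorithms.

```python
def partition(rows, attrs):
    """Equivalence classes of the indiscernibility relation over `attrs`."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def positive_region(rows, attrs, decision):
    """Objects whose `attrs`-class is consistent on the decision attribute."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= block
    return pos

def reduce_attributes(rows, attrs, decision):
    """Greedy attribute reduction: drop attributes whose removal does not
    shrink the positive region (a minimal stand-in for the paper's
    divide-and-conquer reduction machinery)."""
    full = positive_region(rows, attrs, decision)
    reduct = list(attrs)
    for a in attrs:
        trial = [b for b in reduct if b != a]
        if trial and positive_region(rows, trial, decision) == full:
            reduct = trial
    return reduct

# toy decision table: condition attributes 0-2, decision attribute 3
table = [(0, 1, 0, 'y'), (0, 1, 1, 'y'), (1, 0, 0, 'n'), (1, 1, 0, 'n')]
print(reduce_attributes(table, [0, 1, 2], 3))
```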
Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
Sembiring, Pasukat
2017-12-01
Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), operating on non-negative data, has become one of the popular methods for dimensionality reduction. The main strength of this method is non-negativity: an object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block-coordinate formulation that has proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework and extends it to nonlinearly constrained NMF formulations.
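A bare-bones version of the ANLS iteration is easy to state; the sketch below alternates exact nonnegative least-squares solves for W and H using SciPy's nnls. The rank, iteration count, and random initialization are illustrative, and the paper's additional nonlinear constraints are not modeled here.

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(V, rank, n_iter=100, seed=0):
    """Minimal ANLS-style NMF: alternate nonnegative least-squares solves
    for H (columns) and W (rows), a plain baseline for V ~ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # fix W, solve each column of H: min ||V[:, j] - W h||, h >= 0
        for j in range(n):
            H[:, j], _ = nnls(W, V[:, j])
        # fix H, solve each row of W via the transposed problem
        for i in range(m):
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 12)))
W, H = anls_nmf(V, rank=3, n_iter=30)
print(np.linalg.norm(V - W @ H))   # reconstruction error
```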
The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...
NaNet: a configurable NIC bridging the gap between HPC and real-time HEP GPU computing
International Nuclear Information System (INIS)
Lonardo, A.; Ameli, F.; Biagioni, A.; Frezza, O.; Cicero, F. Lo; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Simeone, F.; Simula, F.; Tosoratto, L.; Vicini, P.; Ammendola, R.; Ramusino, A. Cotta; Fiorini, M.; Neri, I.; Lamanna, G.; Pontisso, L.; Sozzi, M.; Rossetti, D.
2015-01-01
NaNet is an FPGA-based PCIe Network Interface Card (NIC) design with GPUDirect and Remote Direct Memory Access (RDMA) capabilities featuring a configurable and extensible set of network channels. The design currently supports both standard—GbE (1000BASE-T) and 10GbE (10GBASE-R)—and custom—34 Gbps APElink and 2.5 Gbps deterministic-latency KM3link—channels, but its modularity allows for straightforward inclusion of other link technologies. The GPUDirect feature combined with a transport layer offload module and a data stream processing stage makes NaNet a low-latency NIC suitable for real-time GPU processing. In this paper we describe the NaNet architecture and its performance, presenting two of its use cases: the GPU-based low-level trigger for the RICH detector in the NA62 experiment at CERN and the on-/off-shore data transport system for the KM3NeT-IT underwater neutrino telescope.
Directory of Open Access Journals (Sweden)
Mangal Singh
2017-12-01
This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the Peak-to-Average Power Ratio (PAPR) of an Orthogonal Frequency Division Multiplexing signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that an improved harmony search-based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure compared with conventional algorithms.
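To make the PTS mechanics concrete, the sketch below partitions an OFDM symbol into sub-blocks and searches phase vectors for the lowest PAPR. A plain random search stands in for the improved harmony search; the sub-block count, phase alphabet, and trial budget are illustrative assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_random_search(X, n_sub=4, phases=(1, -1, 1j, -1j), trials=256, seed=0):
    """PTS with a plain random search over phase vectors, standing in for
    the metaheuristic optimizer evaluated in the paper."""
    rng = np.random.default_rng(seed)
    # partition the frequency-domain symbol into disjoint sub-blocks
    parts = np.array_split(np.arange(X.size), n_sub)
    sub_time = []
    for idx in parts:
        Xi = np.zeros_like(X)
        Xi[idx] = X[idx]
        sub_time.append(np.fft.ifft(Xi))
    best_papr, best_b = np.inf, None
    for _ in range(trials):
        b = rng.choice(np.asarray(phases), size=n_sub)
        x = sum(bi * si for bi, si in zip(b, sub_time))
        p = papr_db(x)
        if p < best_papr:
            best_papr, best_b = p, b
    return best_papr, best_b

# QPSK OFDM symbol with 256 subcarriers
rng = np.random.default_rng(2)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=256)
print(papr_db(np.fft.ifft(X)), pts_random_search(X)[0])
```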
Directory of Open Access Journals (Sweden)
Prajakta Desai
Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity to use cooperation among close-proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent-based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction) across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.
Desai, Prajakta; Loke, Seng W; Desai, Aniruddha
2017-01-01
Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.
Effects of Alfvénic Drift on Diffusive Shock Acceleration at Weak Cluster Shocks
Kang, Hyesung; Ryu, Dongsu
2018-03-01
Non-detection of γ-ray emission from galaxy clusters has challenged diffusive shock acceleration (DSA) of cosmic-ray (CR) protons at weak collisionless shocks that are expected to form in the intracluster medium. As an effort to address this problem, we here explore possible roles of Alfvén waves self-excited via resonant streaming instability during the CR acceleration at parallel shocks. The mean drift of Alfvén waves may either increase or decrease the scattering center compression ratio, depending on the postshock cross-helicity, leading to either flatter or steeper CR spectra. We first examine such effects at planar shocks, based on the transport of Alfvén waves in the small amplitude limit. For the shock parameters relevant to cluster shocks, Alfvénic drift flattens the CR spectrum slightly, resulting in a small increase of the CR acceleration efficiency, η. We then consider two additional, physically motivated cases: (1) postshock waves are isotropized via MHD and plasma processes across the shock transition, and (2) postshock waves contain only forward waves propagating along with the flow due to a possible gradient of CR pressure behind the shock. In these cases, Alfvénic drift could reduce η by as much as a factor of five for weak cluster shocks. For the canonical parameters adopted here, we suggest η ∼ 10⁻⁴–10⁻² for shocks with sonic Mach number M_s ≈ 2–3. The possible reduction of η may help ease the tension between non-detection of γ-rays from galaxy clusters and DSA predictions.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction
Directory of Open Access Journals (Sweden)
ZHAO Jiaojiao
2015-05-01
A fast and high-precision orientation algorithm for BeiDou is proposed, based on a deep analysis of the BeiDou constellation characteristics and the features of its GEO satellites. Exploiting the good east-west geometry of the GEO satellites, candidate baseline vectors are first solved from the GEO observations combined with dimensionality-reduction theory. The ambiguity function is then used to judge the candidates, obtain the optimal baseline vector, and fix the wide-lane integer ambiguities; on this basis, the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm but also greatly reduces the ambiguity search region, thus allowing the integer ambiguities to be calculated in a single epoch. The algorithm is simulated with actual BeiDou ephemerides, and the results show that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (standard deviations of pitch and heading of 0.07° and 0.13°, respectively) in a real-time, dynamic environment.
Directory of Open Access Journals (Sweden)
Mohammad S. Khan
2017-07-01
Transgenic Brassica napus harboring the synthetic chitinase (NiC) gene exhibits broad-spectrum antifungal resistance. As rhizosphere microorganisms play an important role in element cycling and nutrient transformation, biosafety assessment of NiC-containing transgenic plants on the soil ecosystem is a regulatory requirement. The current study is designed to evaluate the impact of the NiC gene on rhizosphere enzyme activities and microbial community structure. The transgenic lines with the synthetic chitinase gene (NiC) showed resistance to Alternaria brassicicola, a common disease-causing fungal pathogen. The rhizosphere enzyme analysis showed no significant difference in the activities of five soil enzymes (alkaline phosphomonoesterase, arylsulphatase, β-glucosidase, urease and sucrase) between the transgenic and non-transgenic lines of the B. napus varieties Durr-e-NIFA (DN) and Abasyne-95 (AB-95). However, varietal differences were observed based on the analysis of molecular variance. Some individual enzymes differed significantly between the transgenic and non-transgenic lines, but the results were not reproducible in the second trial and were thus considered an environmental effect. Genotypic diversity of soil microbes through 16S–23S rRNA intergenic spacer region amplification was assessed to evaluate the potential impact of the transgene. No significant difference in diversity (4% for bacterial and 12% for fungal communities) between soil microbes of NiC B. napus and the non-transgenic lines was found. However, significant varietal differences were observed between DN and AB-95, with 79% bacterial and 54% fungal diversity. We conclude that the NiC B. napus lines may not affect the microbial enzyme activities and community structure of the rhizosphere soil. Varietal differences might be responsible for minor changes in the tested parameters.
S-bases as a tool to solve reduction problems for Feynman integrals
International Nuclear Information System (INIS)
Smirnov, A.V.; Smirnov, V.A.
2006-01-01
We suggest a mathematical definition of the notion of master integrals and present a brief review of algorithmic methods to solve reduction problems for Feynman integrals based on integration by parts relations. In particular, we discuss a recently suggested reduction algorithm which uses Groebner bases. New results obtained with its help for a family of three-loop Feynman integrals are outlined.
S-bases as a tool to solve reduction problems for Feynman integrals
Energy Technology Data Exchange (ETDEWEB)
Smirnov, A.V. [Scientific Research Computing Center of Moscow State University, Moscow 119992 (Russian Federation); Smirnov, V.A. [Nuclear Physics Institute of Moscow State University, Moscow 119992 (Russian Federation)
2006-10-15
We suggest a mathematical definition of the notion of master integrals and present a brief review of algorithmic methods to solve reduction problems for Feynman integrals based on integration by parts relations. In particular, we discuss a recently suggested reduction algorithm which uses Groebner bases. New results obtained with its help for a family of three-loop Feynman integrals are outlined.
Alfvénic fluctuations in "newborn" polar solar wind
Directory of Open Access Journals (Sweden)
B. Bavassano
2005-06-01
The 3-D structure of the solar wind is strongly dependent upon the Sun's activity cycle. At low solar activity a bimodal structure is dominant, with a fast and uniform flow at high latitudes and slow, variable flows at low latitudes. Around solar maximum, in sharp contrast, variable flows are observed at all latitudes. This last kind of pattern, however, is a relatively short-lived feature, and quite soon after solar maximum the polar wind tends to regain its role. The plasma parameter distributions for these newborn polar flows appear very similar to those typically observed in polar wind at low solar activity. The point addressed here concerns polar wind fluctuations. As is well known, the low-solar-activity polar wind is characterized by a strong flow of Alfvénic fluctuations. Does this hold for the new polar flows too? An answer to this question is given here through a comparative statistical analysis of parameters such as total energy, cross helicity, and residual energy, which are in general use to describe the Alfvénic character of fluctuations. Our results indicate that the main features of the Alfvénic fluctuations observed in the low-solar-activity polar wind were quickly recovered in the new polar flows that developed shortly after solar maximum. Keywords: Interplanetary physics (MHD waves and turbulence; sources of the solar wind) – Space plasma physics (turbulence).
Venus' night side atmospheric dynamics using near infrared observations from VEx/VIRTIS and TNG/NICS
Mota Machado, Pedro; Peralta, Javier; Luz, David; Gonçalves, Ruben; Widemann, Thomas; Oliveira, Joana
2016-10-01
We present night-side Venus winds based on coordinated observations carried out with Venus Express' VIRTIS instrument and the Near Infrared Camera (NICS) of the Telescopio Nazionale Galileo (TNG). With the NICS camera we acquired images in the continuum K filter at 2.28 μm, which allows motions at Venus' lower cloud level, close to 48 km altitude, to be monitored. We will present final results of cloud-tracked winds from ground-based TNG observations and from coordinated space-based VEx/VIRTIS observations. The Venus lower cloud deck is centred at 48 km altitude, where fundamental dynamical exchanges that help maintain superrotation are thought to occur. The lower Venusian atmosphere is a strong source of thermal radiation, with the gaseous CO2 component allowing radiation to escape in windows at 1.74 and 2.28 μm. At these wavelengths, radiation originates below 35 km and unit opacity is reached at the lower cloud level, close to 48 km. It is therefore possible to observe the horizontal cloud structure, with thicker clouds seen silhouetted against the bright thermal background from the low atmosphere. By continuously monitoring the horizontal cloud structure at 2.28 μm (NICS Kcont filter), it is possible to determine wind fields using the cloud-tracking technique. We acquired a series of short exposures of the Venus disk. Cloud displacements on the night side of Venus were computed using a semi-automated phase-correlation technique. The Venus apparent diameter at the observation dates was greater than 32", allowing high spatial precision. The 0.13" pixel scale of the NICS narrow-field camera allowed ~3-pixel displacements to be resolved. The absolute spatial resolution on the disk was ~100 km/px at disk center, and the (0.8-1") seeing-limited resolution was ~400 km/px. By co-adding the best images and cross-correlating cloud regions, the effective resolution was significantly better than the seeing-limited resolution. In order to correct for
A Problem-Reduction Evolutionary Algorithm for Solving the Capacitated Vehicle Routing Problem
Directory of Open Access Journals (Sweden)
Wanfeng Liu
2015-01-01
Assessment of the components of a solution helps provide useful information for an optimization problem. This paper presents a new population-based problem-reduction evolutionary algorithm (PREA) based on the assessment of solution components. An individual solution is regarded as being constructed from basic elements, and the concept of acceptability is introduced to evaluate them. The PREA consists of a searching phase and an evaluation phase. The acceptability of basic elements is calculated in the evaluation phase and passed to the searching phase. In the searching phase, for each individual solution, the original optimization problem is reduced to a new smaller-size problem. As the algorithm evolves, the number of common basic elements in the population increases until all individual solutions are identical, which is taken to be the near-optimal solution of the optimization problem. The new algorithm is applied to a large variety of capacitated vehicle routing problems (CVRP) with up to nearly 500 customers. Experimental results show that the proposed algorithm has the advantages of fast convergence and robustness in solution quality over the comparative algorithms.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). The algorithms are illustrated with working Matlab code and applications in speech processing.
Development and performance analysis of a lossless data reduction algorithm for voip
International Nuclear Information System (INIS)
Misbahuddin, S.; Boulejfen, N.
2014-01-01
VoIP (Voice over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, standard compression algorithms are applied in VoIP systems. However, these algorithms degrade voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized voice frames (VFs) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain voice quality along with improving VoIP data transfer rates. (author)
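The abstract does not spell out the reduction scheme, but the general idea of losslessly exploiting inter-frame redundancy can be sketched as follows: transmit the first voice frame verbatim and then only the byte positions that change between consecutive frames. The frame contents and the cost model (two bytes per differing position) are illustrative assumptions, not the paper's algorithm.

```python
def delta_reduce(frames):
    """Lossless inter-frame reduction sketch: the first frame is kept
    verbatim; each later frame is encoded as (position, value) pairs for
    the bytes that differ from the previous frame. Returns the encoding
    and the achieved compression ratio."""
    encoded = [bytes(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([(i, b) for i, (a, b) in enumerate(zip(prev, cur)) if a != b])
    raw = sum(len(f) for f in frames)
    reduced = len(frames[0]) + sum(2 * len(d) for d in encoded[1:])
    return encoded, raw / reduced

# two highly similar 16-byte "voice frames"
f0 = bytes(range(16))
f1 = bytearray(f0); f1[3] ^= 0xFF   # one byte differs from the previous frame
print(delta_reduce([f0, bytes(f1)])[1])
```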
Directory of Open Access Journals (Sweden)
Popa Ovidiu Cristian
2015-05-01
Full Text Available This article illustrates the concept of tourism potential, which includes all natural and human tourism resources which generate various forms of tourism. Slănic Moldova town is in a great development, being sustained by the glorious oldtime image: ”Moldova‟s Pearl”. The recent accomplishments, the implementation projects and the short and medium time investment programs aim not only to affirm the resort at a regional level, but to transform it in to an authentic “Romanian tourism pearl”. Developing Slănic Moldova town will aim to develop its natural resources. For the years to come, it is willing to sustain a long-lasting economy especially based on touristic services at a European level, but also on diversifying the local economic activities, in respect for the nature and permanent environment preoccupation. In order to reach certain values the contribution of all factors that can determine the town‟s socio-economic development are needed: the local community and the local‟s support, keeping the environment intact and not the least increasing the number of tourists. Slănic Moldova will be one of the main touristic balneoclimatheric mountain destinations in Romania having a diverse and attractive touristic offer during the entire year, high quality touristic services, in an exceptional, pollution free, natural environment. Slănic Moldova will pass through an essential stage of its development, in which the national and external touristic context will be redefined. Being guided by the reputation of „Moldova‟s Pearl”, Slănic Moldova will develop its mineral waters and great natural environment extraordinary potential, thus becoming the great „pearl of Romanian tourism”.
Environmental Optimization Using the WAste Reduction Algorithm (WAR)
Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...
SUNWARD-PROPAGATING ALFVÉNIC FLUCTUATIONS OBSERVED IN THE HELIOSPHERE
Energy Technology Data Exchange (ETDEWEB)
Li, Hui; Wang, Chi [State Key Laboratory of Space Weather, National Space Science Center, CAS, Beijing, 100190 (China); Belcher, John W.; Richardson, John D. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA (United States); He, Jiansen, E-mail: hli@spaceweather.ac.cn [School of Earth and Space Sciences, Peking University, Beijing, 100871 (China)
2016-06-10
The mixture/interaction of anti-sunward-propagating Alfvénic fluctuations (AFs) and sunward-propagating Alfvénic fluctuations (SAFs) is believed to result in the decrease of the Alfvénicity of solar wind fluctuations with increasing heliocentric distance. However, SAFs are rarely observed at 1 au and solar wind AFs are found to be generally outward. Using the measurements from Voyager 2 and Wind, we perform a statistical survey of SAFs in the heliosphere inside 6 au. We first report two SAF events observed by Voyager 2. One is in the anti-sunward magnetic sector with a strong positive correlation between the fluctuations of magnetic field and solar wind velocity. The other one is in the sunward magnetic sector with a strong negative magnetic field—velocity correlation. Statistically, the percentage of SAFs increases gradually with heliocentric distance, from about 2.7% at 1.0 au to about 8.7% at 5.5 au. These results provide new clues for understanding the generation mechanism of SAFs.
Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.
2017-12-01
This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility for automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.
A DFT-based genetic algorithm search for AuCu nanoalloy electrocatalysts for CO2 reduction
DEFF Research Database (Denmark)
Lysgaard, Steen; Mýrdal, Jón Steinar Garðarsson; Hansen, Heine Anton
2015-01-01
Using a DFT-based genetic algorithm (GA) approach, we have determined the most stable structure and stoichiometry of a 309-atom icosahedral AuCu nanoalloy, for potential use as an electrocatalyst for CO2 reduction. The identified core–shell nano-particle consists of a copper core interspersed....... This shows that the mixed Cu135@Au174 core–shell nanoalloy has a similar adsorption energy, for the most favorable site, as a pure gold nano-particle. Cu, however, has the effect of stabilizing the icosahedral structure because Au particles are easily distorted when adding adsorbates....... that it is possible to use the LCAO mode to obtain a realistic estimate of the molecular chemisorption energy for systems where the computation in normal grid mode is not computationally feasible. These corrections are employed when calculating adsorption energies on the Cu, Au and most stable mixed particles...
Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich
Praagman, C.
1996-01-01
Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by
Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P
2007-01-01
The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800 Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.
Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis
2006-01-01
The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800 Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) - is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation into the ES-DCC FPGAs. The algorithms implemented into the ES-DCC resulted in a reduction factor of ~20.
Mapping the nursing care with the NIC for patients in risk for pressure ulcer
Directory of Open Access Journals (Sweden)
Ana Gabriela Silva Pereira
2014-06-01
Objective: To identify the nursing care prescribed for patients at risk for pressure ulcer (PU) and to compare it with the Nursing Interventions Classification (NIC) interventions. Method: Cross-mapping study conducted in a university hospital. The sample was composed of 219 adult patients hospitalized in clinical and surgical units. The inclusion criteria were: score ≤ 13 on the Braden Scale and one of the nursing diagnoses Self-care deficit syndrome, Impaired physical mobility, Impaired tissue integrity, Impaired skin integrity, or Risk for impaired skin integrity. The data were collected retrospectively from a nursing prescription system and analyzed statistically by cross mapping. Result: Thirty-two different nursing care items to prevent PU were identified and mapped to 17 different NIC interventions, among them Skin surveillance, Pressure ulcer prevention and Positioning. Conclusion: The cross mapping showed similarities between the prescribed nursing care and the NIC interventions.
LTREE - a lisp-based algorithm for cutset generation using Boolean reduction
International Nuclear Information System (INIS)
Finnicum, D.J.; Rzasa, P.W.
1985-01-01
Fault tree analysis is an important tool for evaluating the safety of nuclear power plants. The basic objective of fault tree analysis is to determine the probability that an undesired event or combination of events will occur. Fault tree analysis involves four main steps: (1) specifying the undesired event or events; (2) constructing the fault tree which represents the ways in which the postulated event(s) could occur; (3) qualitative evaluation of the logic model to identify the minimal cutsets; and (4) quantitative evaluation of the logic model to determine the probability that the postulated event(s) will occur given the probability of occurrence for each individual fault. This paper describes a LISP-based algorithm for the qualitative evaluation of fault trees. Development of this algorithm is the first step in a project to apply expert systems technology to the automation of the fault tree analysis process. The first section of this paper provides an overview of LISP and its capabilities, the second section describes the LTREE algorithm, and the third section discusses ongoing research areas.
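Independently of the LISP implementation described here, the qualitative step (step 3) can be illustrated compactly: expand the tree's AND/OR gates over basic events and apply the absorption law so that only minimal cutsets survive. The tuple-based tree encoding below is an illustrative assumption, not the LTREE data structure.

```python
from itertools import product

def cutsets(gate):
    """Minimal cutset generation by Boolean reduction of a fault tree.
    A tree node is ('AND', children), ('OR', children), or a basic-event
    name; returns the minimal cutsets as a set of frozensets."""
    if isinstance(gate, str):
        return {frozenset([gate])}
    op, children = gate
    child_sets = [cutsets(c) for c in children]
    if op == 'OR':
        sets = set().union(*child_sets)
    else:  # AND: cross-product of the children's cutsets
        sets = {frozenset().union(*combo) for combo in product(*child_sets)}
    # absorption law: drop any cutset that contains another cutset
    return {s for s in sets if not any(t < s for t in sets)}

# TOP = (A OR B) AND (A OR C): minimal cutsets are {A} and {B, C}
tree = ('AND', [('OR', ['A', 'B']), ('OR', ['A', 'C'])])
print(cutsets(tree))
```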
Ensemble Classification of Data Streams Based on Attribute Reduction and a Sliding Window
Directory of Open Access Journals (Sweden)
Yingchun Chen
2018-04-01
With the current increasing volume and dimensionality of data, traditional data classification algorithms are unable to satisfy the demands of practical classification applications on data streams. To deal with noise and concept drift in data streams, we propose an ensemble classification algorithm based on attribute reduction and a sliding window. Using mutual information, an approximate attribute reduction algorithm based on rough sets is used to reduce data dimensionality and increase the diversity of the reduced results. A double-threshold concept drift detection method and a three-stage sliding window control strategy are introduced to improve the performance of the algorithm when dealing with both noise and concept drift. The classification precision is further improved by updating the base classifiers and their nonlinear weights. Experiments on synthetic and real datasets demonstrate the performance of the algorithm in terms of classification precision, memory use, and time efficiency.
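As a minimal illustration of the double-threshold idea, the sketch below monitors the error rate of a stream classifier over a sliding window and reports a warning level and a drift level; the window length and thresholds are illustrative, not the paper's settings.

```python
import collections

class DriftDetector:
    """Toy double-threshold drift detector over a sliding window of 0/1
    prediction errors: a 'warning' level would start training a candidate
    classifier, and a higher 'drift' level would trigger replacement of
    the base classifiers."""
    def __init__(self, window=100, warn=0.3, drift=0.4):
        self.errors = collections.deque(maxlen=window)
        self.warn, self.drift = warn, drift

    def update(self, error):
        self.errors.append(error)
        rate = sum(self.errors) / len(self.errors)
        if rate >= self.drift:
            return 'drift'      # rebuild/replace base classifiers
        if rate >= self.warn:
            return 'warning'    # begin preparing a candidate classifier
        return 'stable'

det = DriftDetector(window=20)
stream = [0] * 15 + [1] * 10    # error burst simulating concept drift
print([det.update(e) for e in stream][-3:])
```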
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high-resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to their high time complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open source on GitHub at the following link: https://github.com/pcdslab/G-MSR.
Shibli, Hussain J.
2013-06-01
Opportunistic schedulers rely on the feedback of all users in order to schedule a set of users with favorable channel conditions. While the downlink channels can be easily estimated at all user terminals via a single broadcast, several key challenges are faced during uplink transmission. First of all, the statistics of the noisy and fading feedback channels are unknown at the base station (BS) and channel training is usually required from all users. Secondly, the amount of network resources (air-time) required for feedback transmission grows linearly with the number of users. In this paper, we tackle the above challenges and propose a Bayesian based scheduling algorithm that 1) reduces the air-time required to identify the strong users, and 2) is agnostic to the statistics of the feedback channels and utilizes the a priori statistics of the additive noise to identify the strong users. Numerical results show that the proposed algorithm reduces the feedback air-time while improving detection in the presence of fading and noisy channels when compared to recent compressed sensing based algorithms. Furthermore, the proposed algorithm achieves a sum-rate throughput close to that obtained by noiseless dedicated feedback systems. © 2013 IEEE.
Image noise reduction algorithm for digital subtraction angiography: clinical results.
Söderman, Michael; Holmin, Staffan; Andersson, Tommy; Palmgren, Charlotta; Babic, Draženko; Hoornaert, Bart
2013-11-01
To test the hypothesis that an image noise reduction algorithm designed for digital subtraction angiography (DSA) in interventional neuroradiology enables a reduction in the patient entrance dose by a factor of 4 while maintaining image quality. This prospective clinical study was approved by the local ethics committee, and all 20 adult patients provided informed consent. DSA was performed with the default reference DSA program, a quarter-dose DSA program with modified acquisition parameters (to reduce patient radiation dose exposure), and a real-time noise-reduction algorithm. Two consecutive biplane DSA data sets were acquired in each patient. The dose-area product (DAP) was calculated for each image and compared. A randomized, blinded, offline reading study was conducted to show noninferiority of the quarter-dose image sets. Overall, 40 samples per treatment group were necessary to achieve 80% power, calculated using a one-sided α level of 2.5%. The mean DAP with the quarter-dose program was 25.3% ± 0.8 of that with the reference program. The median overall image quality scores with the reference program were 9, 13, and 12 for readers 1, 2, and 3, respectively. These scores increased slightly to 12, 15, and 12, respectively, with the quarter-dose program imaging chain. In DSA, a change in technique factors combined with a real-time noise-reduction algorithm will reduce the patient entrance dose by 75%, without a loss of image quality. RSNA, 2013
Swift heavy ion irradiation effects in Pt/C and Ni/C multilayers
Gupta, Ajay; Pandita, Suneel; Avasthi, D. K.; Lodha, G. S.; Nandedkar, R. V.
1998-12-01
The effects of 100 MeV Ag ion irradiation on Ni/C and Pt/C multilayers have been studied using X-ray reflectivity measurements. Modifications are observed in both multilayers at (dE/dx)_e values much below the threshold values for Ni and Pt. This effect is attributed to the discontinuous nature of the metal layers. In both multilayers, interfacial roughness increases with irradiation dose. While the Ni/C multilayers exhibit large ion-beam-induced intermixing, no intermixing is observed in the Pt/C multilayer. This difference in the behavior of the two systems suggests a significant role for chemically guided defect motion in the mixing process associated with swift heavy ion irradiation.
Comparison of order reduction algorithms for application to electrical networks
Directory of Open Access Journals (Sweden)
Lj. Radić-Weissenfeld
2009-05-01
This paper addresses issues related to minimizing the computational burden, in terms of both memory and speed, during the simulation of electrical models. To achieve a simple and computationally fast model, order reduction of its reducible part is proposed. An overview of order reduction algorithms and their application is given.
A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model
Directory of Open Access Journals (Sweden)
Jason Chin-Tiong Chan
2018-01-01
The optimal state sequence of a generalized high-order hidden Markov model (HHMM) is tracked from a given observation sequence using the classical Viterbi algorithm, which is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM; there is no uncertainty if only one optimal state sequence is possible. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence from a first-order HMM to a generalized HHMM with a single observation sequence. This extended algorithm scales exponentially with the order of the HMM; its computational complexity is due to the growth of the model parameters. We then introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. The EOTFA algorithm transforms a generalized high-order HMM into an equivalent first-order HMM, and an entropy-based decoding algorithm is developed based on this equivalent first-order model. The algorithm performs the computation based on the observation sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observation sequence.
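Since the reduction approach ultimately decodes an equivalent first-order model, a standard first-order Viterbi recursion is the computational core; a compact NumPy version is sketched below. This is the maximum-likelihood decoder, not the entropy-based variant, and the toy parameters are illustrative.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Standard first-order Viterbi decoder: pi initial probabilities,
    A[i, j] transition probabilities, B[i, k] emission probabilities."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # N x N transition scores
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1, 0], pi, A, B))
```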
Coherency Identification of Generators Using a PAM Algorithm for Dynamic Reduction of Power Systems
Directory of Open Access Journals (Sweden)
Seung-Il Moon
2012-11-01
This paper presents a new coherency identification method for dynamic reduction of a power system. To achieve dynamic reduction, coherency-based equivalence techniques divide generators into groups according to coherency and then aggregate them. In order to minimize changes in the dynamic response of the reduced equivalent system, coherency identification of the generators should be clearly defined. The objective of the proposed coherency identification method is to determine the optimal coherent groups of generators with respect to the dynamic response, using the Partitioning Around Medoids (PAM) algorithm. For this purpose, the coherency between generators is first evaluated from the dynamic simulation time response, and this result is then used to define a dissimilarity index. Based on the PAM algorithm, the coherent generator groups are determined so that the sum of the index within each group is minimized. This approach ensures that the dynamic characteristics of the original system are preserved, by providing optimized coherency identification. To validate the effectiveness of the technique, simulated cases on an IEEE 39-bus test system are evaluated using PSS/E. The proposed method is compared with an existing coherency identification method that uses the K-means algorithm and is found to provide a better estimate of the original system.
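The grouping step itself is ordinary k-medoids clustering on a precomputed dissimilarity matrix; a bare-bones PAM-style sketch is shown below. The swap strategy is simplified to a per-cluster medoid update, and the toy dissimilarities stand in for the paper's response-based index.

```python
import numpy as np

def pam(D, k, n_iter=100, seed=0):
    """Bare-bones PAM (k-medoids) on a precomputed dissimilarity matrix D,
    as would be built from pairwise differences between generator swing
    responses; returns medoid indices and cluster labels."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            # swap step: pick the member minimizing total within-cluster cost
            new[c] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, D[:, medoids].argmin(axis=1)

# toy dissimilarities between 5 "generators" (symmetric, zero diagonal)
pts = np.random.default_rng(3).random((5, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(pam(D, k=2))
```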
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Passive Classification of Wireless NICs during Rate Switching
Beyah, Raheem A.; Copeland, John A.; Corbett, Cherita L.
2008-01-01
Computer networks have become increasingly ubiquitous. However, with the increase in networked applications, there has also been an increase in difficulty to manage and secure these networks. The proliferation of 802.11 wireless networks has heightened this problem by extending networks beyond physical boundaries. We propose the use of spectral analysis to identify the type of wireless network interface card (NIC). This mechanism can be applied to support the detection of unauthorize...
Directory of Open Access Journals (Sweden)
Min Liu
2018-03-01
Sidelobe reduction is a primary task for synthetic aperture radar (SAR) images. Various methods have been proposed for broadside SAR which suppress the sidelobes effectively while maintaining high image resolution. Squint SAR, especially highly squint SAR, has emerged as an important tool that provides more mobility and flexibility and has become a focus of recent research. One research challenge for squint SAR is how to resolve the severe range-azimuth coupling of the echo signals. Unlike in broadside SAR images, the range and azimuth sidelobes of squint SAR images no longer lie on the principal axes with high probability. Thus the spatially variant apodization (SVA) filters can hardly capture all the sidelobe information, and the sidelobe reduction is not optimal. In this paper, we present an improved algorithm called double spatially variant apodization (D-SVA) for better sidelobe suppression. Satisfactory sidelobe reduction results are achieved with the proposed algorithm by comparing the squint SAR images to the broadside SAR images. Simulation results also demonstrate the reliability and efficiency of the proposed method.
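For background, standard SVA along a single (broadside) axis can be written in a few lines: for each Nyquist-spaced sample, a cosine-on-pedestal weight in [0, 1/2] is chosen to minimize the output magnitude, applied to I and Q separately. The 1-D sinc test signal below is illustrative; the paper's D-SVA extends this idea to the tilted sidelobe geometry of squint SAR, which this sketch does not implement.

```python
import numpy as np

def sva_channel(x):
    """SVA on one real channel: y[n] = x[n] + w[n]*(x[n-1] + x[n+1]),
    with w[n] in [0, 0.5] chosen per sample to minimize |y[n]|."""
    y = x.copy()
    nbr = x[:-2] + x[2:]
    with np.errstate(divide='ignore', invalid='ignore'):
        w = np.where(nbr != 0, -x[1:-1] / nbr, 0.0)
    w = np.clip(w, 0.0, 0.5)           # cosine-on-pedestal weight range
    y[1:-1] = x[1:-1] + w * nbr
    return y

def sva_1d(s):
    return sva_channel(s.real) + 1j * sva_channel(s.imag)

n = np.arange(-8, 9)
s = np.sinc(n + 0.5).astype(complex)   # point response sampled off-peak
side = np.r_[0:7, 9:17]                # indices outside the mainlobe
print(np.abs(s[side]).max(), np.abs(sva_1d(s)[side]).max())
```

Running this shows the peak sidelobe dropping by roughly a factor of five while the mainlobe samples are untouched, which is the defining property of SVA-type filters.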
International Nuclear Information System (INIS)
Samei, Ehsan; Richard, Samuel
2015-01-01
indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different levels for different tasks.
Energy Technology Data Exchange (ETDEWEB)
Samei, Ehsan, E-mail: samei@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Clinical Imaging Physics Group, Departments of Radiology, Physics, Biomedical Engineering, and Electrical and Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University, Durham, North Carolina 27710 (United States)
2015-01-15
indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different levels for different tasks.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
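A hedged sketch of the parameter search described above: generic simulated annealing over two recursion parameters, box-constrained to [0, 1]. The objective below is a stand-in surface with local optima, not the paper's regression model, and the cooling schedule and step size are assumptions.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.05, t0=1.0, cooling=0.98, n_iter=2000):
    """Generic simulated annealing over box-constrained parameters in [0, 1]^2."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [min(1.0, max(0.0, xi + random.gauss(0.0, step))) for xi in x]
        fc = objective(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# stand-in for the regression-model objective over the two recursion parameters
def objective(p):
    a, b = p
    return (a - 0.7) ** 2 + (b - 0.3) ** 2 + 0.1 * math.sin(20 * a) * math.sin(20 * b)

print(simulated_annealing(objective, [0.5, 0.5]))
```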
Alfvénic instabilities driven by runaways in fusion plasmas
International Nuclear Information System (INIS)
Fülöp, T.; Newton, S.
2014-01-01
Runaway particles can be produced in plasmas with large electric fields. Here, we address the possibility that such runaway ions and electrons excite Alfvénic instabilities. The magnetic perturbation induced by these modes can enhance the loss of runaways. This may have important implications for the runaway electron beam formation in tokamak disruptions
Energy Technology Data Exchange (ETDEWEB)
Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)
2016-01-11
Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive in the reconstruction process compared with other well-known algorithms. An additional potential benefit of reducing the number of projections would be a reduction of the time for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
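Of the two building blocks named above, the randomized Kaczmarz half is easy to sketch on its own: rows of a consistent system are sampled with probability proportional to their squared norms and the iterate is projected onto each sampled hyperplane. The dense toy system stands in for the CT forward model; the Douglas–Rachford/total-variation part of the authors' method is not reproduced.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    """Randomized Kaczmarz (Strohmer-Vershynin row sampling) for consistent Ax = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()      # sample rows with prob ~ squared norm
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        # project the iterate onto the hyperplane <a_i, x> = b_i
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy "projection" system standing in for the CT forward model
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
x_true = rng.normal(size=50)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))
```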
International Nuclear Information System (INIS)
Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young
2014-01-01
In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)
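The formulation above hinges on building a flexibility matrix from a few measured modes. The sketch below shows the standard modal-flexibility approximation F ≈ Σᵢ φᵢφᵢᵀ/ωᵢ² for mass-normalized modes of a toy spring-mass chain; the iterated IRS model reduction and the cuckoo optimization search of the paper are not reproduced, and the 3-DOF system is an illustrative assumption.

```python
import numpy as np

def modal_flexibility(phi, omega):
    """Approximate the flexibility matrix from a few mass-normalised modes:
    F ~= sum_i phi_i phi_i^T / omega_i^2 (truncation error shrinks as 1/omega^2)."""
    return (phi / omega**2) @ phi.T

# toy 3-DOF spring-mass chain: stiffness matrix K, unit masses (M = I)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
w2, phi = np.linalg.eigh(K)            # eigenvectors are mass-normalised for M = I
F_modal = modal_flexibility(phi[:, :2], np.sqrt(w2[:2]))  # only 2 "measured" modes
u = F_modal @ np.array([0., 0., 1.])   # static displacement under a unit tip load
print(u, "vs exact", np.linalg.solve(K, [0., 0., 1.]))
```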
GUN CONTROL: Potential Effects of Next-Day Destruction of NICS Background Check Records
National Research Council Canada - National Science Library
2002-01-01
.... Under the Brady Handgun Violence Prevention Act, licensed dealers generally are not to transfer firearms to an individual until a NICS search determines that the transfer will not violate applicable...
Directory of Open Access Journals (Sweden)
K. Lenin
2014-04-01
This paper presents a hybrid biogeography-based algorithm for solving the multi-objective reactive power dispatch problem in a power system. Minimization of real power loss and maximization of the voltage stability margin are taken as the objectives. Artificial bee colony (ABC) optimization is a fast and robust algorithm for global optimization. Biogeography-Based Optimization (BBO) is a more recent biogeography-inspired algorithm; it mainly uses the biogeography-based migration operator to share information among solutions. In this work, a hybrid of BBO and ABC, named HBBABC (Hybrid Biogeography-Based Artificial Bee Colony Optimization), is proposed for general numerical optimization. HBBABC merges the search behavior of ABC with that of BBO: the two algorithms have complementary tendencies, ABC offering good exploration and BBO good exploitation. HBBABC is used to solve the reactive power dispatch problem, and the proposed technique has been tested on the standard IEEE 30-bus test system.
Fast Reduction Method in Dominance-Based Information Systems
Li, Yan; Zhou, Qinghua; Wen, Yongchuan
2018-01-01
In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
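To make the object of the speed-up concrete, here is the naive baseline computation that fast methods of this kind improve on: for each object, collect the set of objects that weakly dominate it on every preference-ordered attribute. The toy decision table is an illustrative assumption; the paper's accelerated method is not shown.

```python
import numpy as np

def dominating_sets(X):
    """For each object x, the set of objects that dominate x, i.e. are >= on
    every (preference-ordered) attribute. Naive O(n^2 m) reference version."""
    n = X.shape[0]
    return [set(np.where((X >= X[i]).all(axis=1))[0]) for i in range(n)]

# toy decision table with three preference-ordered condition attributes
X = np.array([[3, 2, 1],
              [2, 2, 1],
              [3, 3, 2],
              [1, 1, 1]])
for i, d in enumerate(dominating_sets(X)):
    print(f"objects dominating x{i}:", sorted(d))
```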
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures
Ltaief, Hatem
2013-04-01
This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation in the first stage (reduction to band form) to be cast into calls to Level 3 BLAS, and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against the state-of-the-art open source and commercial numerical software packages, namely LAPACK, compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.
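For reference, the sketch below performs the classical one-stage Golub–Kahan bidiagonalization with Householder reflectors and checks that the singular values are preserved. It is the textbook reduction that the article reorganizes, not the tiled two-stage variant with its runtime scheduler.

```python
import numpy as np

def house(x):
    """Householder vector v and beta such that (I - beta v v^T) x = -+||x|| e1."""
    v = x.astype(float).copy()
    normx = np.linalg.norm(x)
    if normx == 0.0:
        return v, 0.0
    v[0] += np.sign(x[0]) * normx if x[0] != 0 else normx
    return v, 2.0 / (v @ v)

def bidiagonalize(A):
    """Classical one-stage Golub-Kahan bidiagonalization: returns an upper
    bidiagonal B with the same singular values as A (m >= n assumed)."""
    B = A.astype(float).copy()
    m, n = B.shape
    for j in range(n):
        # zero out column j below the diagonal (apply reflector from the left)
        v, beta = house(B[j:, j])
        B[j:, j:] -= beta * np.outer(v, v @ B[j:, j:])
        if j < n - 2:
            # zero out row j beyond the superdiagonal (reflector from the right)
            v, beta = house(B[j, j+1:])
            B[j:, j+1:] -= beta * np.outer(B[j:, j+1:] @ v, v)
    return B

A = np.random.rand(8, 5)
B = bidiagonalize(A)
print(np.allclose(np.linalg.svd(A, compute_uv=False),
                  np.linalg.svd(B, compute_uv=False)))
```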
Output Current Ripple Reduction Algorithms for Home Energy Storage Systems
Directory of Open Access Journals (Sweden)
Jin-Hyuk Park
2013-10-01
This paper proposes an output current ripple reduction algorithm using a proportional-integral (PI) controller for an energy storage system (ESS). In single-phase systems, the DC-link voltage of the DC/AC inverter pulsates at twice the grid frequency, producing a second-order harmonic; the output current of the DC/DC converter therefore has a ripple component caused by the ripple of the DC-link voltage. The second-order harmonic adversely affects the battery lifetime. The proposed algorithm has the advantage of reducing the second-order harmonic of the output current in a variable frequency system. The proposed algorithm is verified by PSIM simulation and by experiment with a 3 kW ESS model.
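A minimal sketch of the control idea, assuming an illustrative first-order plant, loop gains, and a 100 Hz (twice a 50 Hz grid) disturbance standing in for the DC-link pulsation; none of these numbers come from the paper.

```python
import math

# Discrete PI loop regulating a DC/DC converter output current against a
# 100 Hz (2x grid frequency) disturbance. Plant, gains, and the first-order
# lag below are illustrative assumptions, not the paper's design.
kp, ki, dt = 0.8, 120.0, 1e-4
i_ref, i_out, integ = 10.0, 0.0, 0.0
peak_ripple = 0.0
for k in range(20000):
    t = k * dt
    disturbance = 0.5 * math.sin(2 * math.pi * 100 * t)   # second-order harmonic
    err = i_ref - i_out
    integ += ki * err * dt
    u = kp * err + integ                                  # PI control effort
    # first-order plant: the output current chases u plus the disturbance
    i_out += dt / 5e-3 * (u + disturbance - i_out)
    if t > 1.5:                                           # steady state reached
        peak_ripple = max(peak_ripple, abs(i_out - i_ref))
print("steady-state peak ripple [A]:", round(peak_ripple, 4))
```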
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
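A compact rendering of the transform itself: whiten the summed class covariance, then eigendecompose the desired class in the whitened space, where the two transformed covariances sum to the identity and therefore share eigenvectors with complementary eigenvalues. The Gaussian toy classes are illustrative assumptions standing in for desired-class and clutter pixels.

```python
import numpy as np

def fkt(X1, X2, k=2):
    """Fukunaga-Koontz transform: whiten the summed covariance, then eigen-
    decompose class 1 in the whitened space. Because S1' + S2' = I there,
    the eigenvectors best for class 1 are simultaneously worst for class 2."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    lam, Phi = np.linalg.eigh(S1 + S2)
    P = Phi / np.sqrt(lam)                  # whitening operator
    w, V = np.linalg.eigh(P.T @ S1 @ P)     # eigenvalues lie in [0, 1]
    # top-k directions for the desired class (largest eigenvalues)
    return P @ V[:, ::-1][:, :k]

rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 10)) * np.linspace(3, 0.3, 10)   # "desired class"
X2 = rng.normal(size=(500, 10)) * np.linspace(0.3, 3, 10)   # "clutter"
W = fkt(X1, X2, k=3)
print((X1 @ W).var(axis=0), (X2 @ W).var(axis=0))           # large vs small
```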
Energy Technology Data Exchange (ETDEWEB)
Matteini, L.; Horbury, T. S.; Schwartz, S. J. [The Blackett Laboratory, Imperial College London, SW7 2AZ (United Kingdom); Pantellini, F. [LESIA, Observatoire de Paris, CNRS, UPMC, Université Paris-Diderot, 5 Place Jules Janssen, F-92195 Meudon (France); Velli, M. [Department of Earth, Planetary, and Space Sciences, UCLA, California (United States)
2015-03-20
We investigate the properties of plasma fluid motion in the large-amplitude, low-frequency fluctuations of highly Alfvénic fast solar wind. We show that protons locally conserve total kinetic energy when observed from an effective frame of reference comoving with the fluctuations. For typical properties of the fast wind, this frame can be reasonably identified by alpha particles which, due to their drift with respect to protons at about the Alfvén speed along the magnetic field, do not partake in the fluid low-frequency fluctuations. Using their velocity to transform the proton velocity into the frame of Alfvénic turbulence, we demonstrate that the resulting plasma motion is characterized by a constant absolute value of the velocity, zero electric fields, and aligned velocity and magnetic field vectors as expected for unidirectional Alfvénic fluctuations in equilibrium. We propose that this constraint, via the correlation between velocity and magnetic field in Alfvénic turbulence, is the origin of the observed constancy of the magnetic field; while the constant velocity corresponding to constant energy can only be observed in the frame of the fluctuations, the corresponding constant total magnetic field, invariant for Galilean transformations, remains the observational signature in the spacecraft frame of the constant total energy in the Alfvén turbulence frame.
Test and data reduction algorithm for the evaluation of lead-acid battery packs
Energy Technology Data Exchange (ETDEWEB)
Nowak, D.
1986-01-15
Experience from the DOE Electric Vehicle Demonstration Project indicated severe battery problems associated with driving electric cars in temperature extremes. The vehicle batteries suffered from a high module failure rate, reduced capacity, and low efficiency. To assess the nature and the extent of the battery problems encountered at various operating temperatures, a test program was established at the University of Alabama in Huntsville (UAH). A test facility was built that is based on Propel cycling equipment, the Hewlett Packard 3497A Data Acquisition System, and the HP85F and HP87 computers. The objective was to establish a cost effective facility that could generate the engineering data base needed for the development of thermal management systems, destratification systems, central watering systems and proper charge algorithms. It was hoped that the development and implementation of these systems by EV manufacturers and fleet operators of EVs would eliminate the most pressing problems that occurred in the DOE EV Demonstration Project. The data reduction algorithm is described.
A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)
Directory of Open Access Journals (Sweden)
Bogdan F. Ion
2014-09-01
Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The overall mechanism occurs in two stages: (i) formation of a thioester enzyme-intermediate (IC2) and (ii) hydrolysis of the thioester bond to give the products. The polar protein environment has a significant effect in stabilizing reaction intermediates and in particular transition states. As a result, both stages effectively occur in one step, with Stage 1, formation of IC2, being the rate-limiting barrier at a cost of 53.5 kJ·mol−1 with respect to the reactant complex, RC. The effects of dispersion interactions on the overall mechanism were also considered but were generally calculated to have less significant effects, with the overall mechanism being unchanged. In addition, the active site lysyl (Lys103) is concluded to likely play a role in stabilizing the thiolate of Cys136 during the reaction.
Ion, Bogdan F; Kazim, Erum; Gauld, James W
2014-09-29
Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The overall mechanism occurs in two stages: (i) formation of a thioester enzyme-intermediate (IC2) and (ii) hydrolysis of the thioester bond to give the products. The polar protein environment has a significant effect in stabilizing reaction intermediates and in particular transition states. As a result, both stages effectively occur in one step, with Stage 1, formation of IC2, being the rate-limiting barrier at a cost of 53.5 kJ·mol−1 with respect to the reactant complex, RC. The effects of dispersion interactions on the overall mechanism were also considered but were generally calculated to have less significant effects, with the overall mechanism being unchanged. In addition, the active site lysyl (Lys103) is concluded to likely play a role in stabilizing the thiolate of Cys136 during the reaction.
Arpaia, P; Inglese, V
2010-01-01
A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement syste...
Building 1D resonance broadened quasilinear (RBQ) code for fast ions Alfvénic relaxations
Gorelenkov, Nikolai; Duarte, Vinicius; Berk, Herbert
2016-10-01
The performance of a burning plasma is limited by the confinement of super-Alfvénic fusion products, e.g. alpha particles, which are capable of resonating with the Alfvénic eigenmodes (AEs). The effect of AEs on fast ions is evaluated using a resonance line broadened diffusion coefficient. The interaction of fast ions and AEs is captured for cases where there are either isolated or overlapping modes. A new code, RBQ1D, is being built which constructs diffusion coefficients based on realistic eigenfunctions that are determined by the ideal MHD code NOVA. The wave-particle interaction can be reduced to one-dimensional dynamics where, for the Alfvénic modes, the particle kinetic energy is typically nearly constant. Hence, to a good approximation, the quasi-linear (QL) diffusion equation only contains derivatives in the angular momentum. The diffusion equation is then one-dimensional and is solved efficiently for all particles simultaneously with the equation for the evolution of the wave angular momentum. The evolution of the fast ion constants of motion is governed by the QL diffusion equations, which are adapted to find the ion distribution function.
Reliability research on nuclear I and C system at KAIST NIC laboratory
International Nuclear Information System (INIS)
Seong, Poong-Hyun
1996-01-01
As the use of computer systems becomes popular in the nuclear industry, reliability assurance of digitized nuclear instrumentation and control systems is becoming one of the hot issues. Among these issues are S/W verification and validation, reliability estimation of digital systems, and development strategies for high-integrity knowledge bases for expert systems. In order to address these issues, the Nuclear Instrumentation and Control (NIC) laboratory at KAIST is conducting several research projects. This paper describes some highlights of these research activities. The final goal of these research activities is to develop useful methodologies and tools for the development of dependable digital nuclear instrumentation and control systems. (author)
International Nuclear Information System (INIS)
Ranganathan, Vaitheeswaran; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Gupta, Kamlesh K.; Basu, Sumit; Maiya, Vikram; Joseph, Jolly; Nirhali, Amit
2010-01-01
This study aims to evaluate the performance of a new algorithm for optimization of beam weights in anatomy-based intensity modulated radiotherapy (IMRT). The algorithm uses a numerical technique called Gaussian elimination that derives the optimum beam weights in an exact, non-iterative way. The distinct feature of the algorithm is that it takes only a fraction of a second to optimize the beam weights, irrespective of the complexity of the given case. The algorithm has been implemented using MATLAB with a Graphical User Interface (GUI) option for convenient specification of dose constraints and penalties to different structures. We have tested the numerical and clinical capabilities of the proposed algorithm on several patient cases in comparison with the KonRad inverse planning system. The comparative analysis shows that the algorithm can generate anatomy-based IMRT plans with about a 50% reduction in the number of MUs and a 60% reduction in the number of apertures, while producing a dose distribution comparable to that of beamlet-based IMRT plans. Hence, it is clearly evident from the study that the proposed algorithm can be effectively used for clinical applications. (author)
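The abstract's central point is that beam weights can be obtained by one exact linear solve rather than by iteration. A minimal sketch of that idea follows, assuming an illustrative dose-influence matrix, prescription, and penalty weights; the authors' constraint handling and MATLAB GUI are not reproduced, and the non-negativity clip is a simplification.

```python
import numpy as np

# Non-iterative beam-weight optimisation in the spirit of the abstract:
# if D[i, j] is the dose to point i per unit weight of beam j and d is the
# prescribed dose, the penalty-weighted least-squares weights solve the
# normal equations -- one Gaussian elimination, no iterations.
# D, d, and the penalties p are illustrative stand-ins.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(40, 7))        # 40 dose points, 7 beams
d = np.full(40, 60.0)                          # prescribed dose [Gy]
p = np.ones(40); p[:10] = 5.0                  # heavier penalty on 10 target points
A = D.T @ (p[:, None] * D)                     # normal-equation matrix
w = np.linalg.solve(A, D.T @ (p * d))          # exact, fraction-of-a-second solve
w = np.clip(w, 0.0, None)                      # crude post-hoc non-negativity clip
print("beam weights:", np.round(w, 2))
```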
Chung, King; Zeng, Fan-Gang; Acker, Kyle N
2006-10-01
Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.
Lee, Eunjoo; Park, Hyejin; Nam, Mihwa; Whyte, James
2011-01-01
The purpose of the study was to identify Nursing Interventions Classification (NIC) interventions performed by Korean school nurses. The Korean data were then compared to U.S. data from other studies in order to identify differences and similarities between Korean and U.S. school nurse practice. Of the 542 available NIC interventions, 180 were…
Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm
Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven
2010-05-01
Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs, of which water is the most important for human life. Meeting this need requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs allows more people to be served at the least cost; municipal investment therefore needs to maximize benefits or minimize expenditures, so the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of the water distribution network of Mahabad City (north-west Iran). By designing two models and comparing the resulting costs, the abilities of the GA were determined: the GA-based model could find optimum pipe diameters that reduce the design costs of the network. Results show that water distribution network design using a genetic algorithm could reduce project costs by at least 7% in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
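A toy version of the GA formulation described above, assuming made-up pipe costs, candidate diameters, and a crude Hazen-Williams-like head-loss surrogate with a penalty for infeasible designs; the real Mahabad network model is not reproduced.

```python
import random

DIAMS = [100, 150, 200, 250, 300]                        # candidate diameters [mm]
COST  = {100: 10, 150: 18, 200: 30, 250: 46, 300: 65}    # assumed $/m
N_PIPES, LENGTH, BUDGET = 12, 500.0, 40.0                # head-loss budget [m]

def fitness(genome):
    cost = sum(COST[d] * LENGTH for d in genome)
    # head loss shrinks steeply with diameter (Hazen-Williams-like surrogate)
    head_loss = sum(LENGTH * (150.0 / d) ** 4.87 / 1e2 for d in genome)
    penalty = 1e6 * max(0.0, head_loss - BUDGET)         # infeasibility penalty
    return cost + penalty

def ga(pop_size=60, gens=200, pm=0.1):
    pop = [[random.choice(DIAMS) for _ in range(N_PIPES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                 # elitist truncation
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_PIPES)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(DIAMS) if random.random() < pm else g
                     for g in child]                     # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print(best, "cost:", fitness(best))
```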
Alfvénic Dynamics and Fine Structuring of Discrete Auroral Arcs: Swarm and e-POP Observations
Miles, D.; Mann, I. R.; Pakhotin, I.; Burchill, J. K.; Howarth, A. D.; Knudsen, D. J.; Wallis, D. D.; Yau, A. W.; Lysak, R. L.
2017-12-01
The electrodynamics associated with dual discrete arc aurora with anti-parallel flow along the arcs were observed nearly simultaneously by the enhanced Polar Outflow Probe (e-POP) and the Swarm A and C spacecraft. Auroral imaging from e-POP reveals 1-10 km structuring of the arcs, which move and evolve on second timescales and confound the traditional single-spacecraft field-aligned current algorithms. High-cadence magnetic data from e-POP show 1-10 Hz, presumably Alfvénic, perturbations coincident with and at the same scale size as the observed dynamic auroral fine structures. High-cadence electric and magnetic field data from Swarm A reveal non-stationary electrodynamics involving reflected and interfering Alfvén waves and signatures of modulation consistent with trapping in the Ionospheric Alfvén Resonator (IAR). Together, these observations suggest a role for Alfvén waves, and perhaps also the IAR, in discrete arc dynamics on 0.2-10 s timescales and 1-10 km spatial scales.
Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi
2011-04-01
The active involvement of hospital laboratory in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategy of independent medical institutions is insufficient. To share the clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data to ReNICS server located at Akita University Hospital from their health care system's clinical computer applications over the internet. We established the system to automate the statistical processes, so that the participants can access the server to monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, our system also provides the documents server, microbiology and antimicrobiotic database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next generation network for quality improvement of infection control.
Grañó Martí, Sílvia
2015-01-01
Clinical question: Is conventional treatment of non-specific chronic low back pain in adults more effective when craniosacral therapy is added? Objective: To assess the effectiveness of craniosacral therapy in adults with non-specific chronic low back pain. Methodology: An experimental study based on a randomized, single-blind clinical trial. It will be carried out during 2016 and until mid-2017 in the town of Lleida. The sample...
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of the similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
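Since all three algorithms in the study share a Wiener-based gain rule, a textbook sketch of that rule is given below: G = SNR/(1 + SNR) per frequency bin, with the a-priori SNR crudely estimated by spectral subtraction and a gain floor to limit musical noise. This is the generic rule only, not any of the paper's model-based estimators.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=0.05):
    """Wiener-type spectral gain G = SNR / (1 + SNR); the SNR here is a plain
    spectral-subtraction estimate and the floor limits musical noise."""
    snr = np.maximum(noisy_psd - noise_psd, 0.0) / np.maximum(noise_psd, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)

# one noisy STFT frame (squared magnitudes), noise PSD assumed known/estimated
rng = np.random.default_rng(0)
speech = np.abs(rng.normal(0, 3.0, 257)) ** 2
noise = np.abs(rng.normal(0, 1.0, 257)) ** 2
noisy = speech + noise
enhanced = wiener_gain(noisy, noise) * np.sqrt(noisy)   # gain applied to magnitude
print(enhanced[:5])
```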
Alfvénic waves in polar spicules
Tavabi, E.; Koutchmy, S.; Ajabshirizadeh, A.; Ahangarzadeh Maralani, A. R.; Zeighami, S.
2015-01-01
Context. For investigating spicules from the photosphere to coronal heights, the new Hinode/SOT long series of high-resolution observations from space taken in CaII H line emission offers an improved way to look at their remarkable dynamical behavior using images free of seeing effects. They should be put in the context of the huge amount of already accumulated material from ground-based instruments, including high-resolution spectra of off-limb spicules. Aims: Both the origin of the phenomenon and the significance of dynamical spicules for the heating above the top of the photosphere and the fuelling of the chromosphere and the transition region need more investigation, including the possible role of the associated magnetic waves for the corona higher up. Methods: We analyze in great detail the proper transverse motions of mature and tall polar region spicules for different heights, assuming that there might be helical-kink waves or Alfvénic waves propagating inside their multicomponent substructure, by interpreting the quasi-coherent behavior of all visible components presumably confined by a surrounding magnetic envelope. We concentrate the analysis on the taller CaII spicules, which are more relevant for coronal heights and easier to measure. Two-dimensional velocity maps of proper motion were computed for the first time using a correlation tracking technique based on FFTs and a cross-correlation function with a 2nd-order-accuracy Taylor expansion. Highly processed images with the popular mad-max algorithm were first prepared to perform this analysis. The locations of the peak of the cross-correlation function were obtained with subpixel accuracy. Results: The surge-like behavior of solar polar region spicules supports the untwisting multicomponent interpretation of spicules exhibiting helical dynamics. Several tall spicules are found with (i) upward and downward flows that are similar at lower and middle levels, the rate of upward motion being slightly higher at high
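The velocity-map step above rests on FFT-based correlation tracking. A minimal sketch of the integer-pixel part is given below (locate the displacement as the argmax of the cross-correlation surface computed via FFTs); the second-order Taylor refinement to subpixel accuracy mentioned in the abstract is omitted, and the rolled random image is an illustrative assumption.

```python
import numpy as np

def integer_shift(a, b):
    """Displacement of patch b relative to a via FFT-based cross-correlation
    (argmax of the correlation surface); subpixel refinement omitted."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    corr = np.fft.ifft2(np.conj(A) * B).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT indices to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
shifted = np.roll(np.roll(img, 3, axis=0), -5, axis=1)
print(integer_shift(img, shifted))   # expect (3, -5)
```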
La responsabilitat davant la intel·ligència artificial en el comerç electrònic [Liability in the face of artificial intelligence in electronic commerce]
Martín i Palomas, Elisabet
2015-01-01
This thesis considers the liability arising from actions performed autonomously by systems endowed with artificial intelligence, without the direct participation of any human being, in the areas most directly related to electronic commerce. To this end, it analyzes the activities carried out by some of the leading international e-commerce companies, such as the American group eBay and the Chinese group Alibaba. After developing the prin...
Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs
Directory of Open Access Journals (Sweden)
Enrico Creaco
2018-04-01
The paper presents the comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out, while containing the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm consists of a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA, to search for the optimal valve settings at various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the research space, emerge. In fact, for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.
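A skeleton of the sequential addition strategy as the abstract describes it: at each step, keep the valves already chosen and score only the candidate sites for the single new valve. The leakage function below is a deliberate placeholder; the hydraulic model and the inner iterated-LP setting optimization are assumed away.

```python
# Sequential addition (SA) skeleton: at step Nval, only ONE new location is
# searched, holding the Nval-1 previously chosen valves fixed. The leakage
# scorer is a placeholder standing in for the hydraulic model plus the inner
# iterated-LP optimization of valve settings.
def leakage_after_placing(valve_sites):
    # stand-in: pretend some site combinations abate more leakage than others
    return 100.0 - sum(hash(s) % 17 for s in valve_sites) ** 0.5

def sequential_addition(candidate_sites, n_valves):
    chosen = []
    for _ in range(n_valves):
        remaining = [s for s in candidate_sites if s not in chosen]
        best = min(remaining, key=lambda s: leakage_after_placing(chosen + [s]))
        chosen.append(best)
    return chosen

sites = [f"pipe-{i}" for i in range(20)]
print(sequential_addition(sites, n_valves=4))
```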
Directory of Open Access Journals (Sweden)
Ion LUNGU
2012-01-01
In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
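A Python rendering of the core pattern being optimized, for readers without CUDA at hand: the stride-halving tree reduction that one thread block performs in shared memory, written sequentially here. None of the paper's CUDA-specific optimizations (memory coalescing, bank-conflict avoidance, warp unrolling) appear in this sketch.

```python
import numpy as np

def block_tree_reduce(block):
    """Stride-halving tree reduction as one CUDA thread block would perform
    it in shared memory: log2(n) sweeps, each pairing element i with element
    i + stride. Sequential here; each inner loop runs in parallel on a GPU."""
    data = block.copy()
    stride = len(data) // 2
    while stride > 0:
        for i in range(stride):            # on the GPU: one thread per i
            data[i] += data[i + stride]
        stride //= 2
    return data[0]

x = np.arange(1, 1025, dtype=np.float64)   # one 1024-element "block"
print(block_tree_reduce(x), "==", x.sum())
```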
Development and evaluation of thermal model reduction algorithms for spacecraft
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here, which restricts the useful application of these methods, so additional model reduction methods have been developed that respect these constraints. The Matrix Reduction method allows the differential equation to be approximated to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
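One classical way to reduce the node count of a linear thermal (conductance) model is static, Guyan-style condensation onto a set of retained nodes. Whether this matches the paper's Matrix Reduction method exactly is not claimed, but it illustrates the kind of exact-up-to-numerics reduction described; the 4-node chain is an illustrative assumption.

```python
import numpy as np

def condense(K, q, keep):
    """Static condensation of a linear conductance system K T = q onto the
    retained nodes `keep`: K_red = K_kk - K_ks K_ss^-1 K_sk, exact for the
    steady state up to numerical error."""
    slave = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kkk = K[np.ix_(keep, keep)]; Kks = K[np.ix_(keep, slave)]
    Ksk = K[np.ix_(slave, keep)]; Kss = K[np.ix_(slave, slave)]
    K_red = Kkk - Kks @ np.linalg.solve(Kss, Ksk)
    q_red = q[keep] - Kks @ np.linalg.solve(Kss, q[slave])
    return K_red, q_red

# 4-node conductance chain, node 0 tied to ground, heat load on node 3
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
q = np.array([0., 0., 0., 5.])
K_red, q_red = condense(K, q, keep=np.array([0, 3]))
T_kept = np.linalg.solve(K_red, q_red)
print(T_kept, "vs full model", np.linalg.solve(K, q)[[0, 3]])
```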
Álvarez Baldó, Andrea
2010-01-01
LINKCARE is an experimental telemedicine platform. The first phase of the project has been designed and developed. The work to be carried out consists of deploying the LINKCARE platform in a real environment, initially pre-production and later production, at the Hospital Clínic; the project will therefore be developed at that institution.
NIC (Nuclear Industry in China) exhibition. Press file
International Nuclear Information System (INIS)
1998-01-01
Framatome participated in the NIC exhibition which took place in Beijing (China) in March 1998. This press dossier was distributed to visitors. It presents in a first part the activities of the Framatome group in the People's Republic of China (new constructions (Daya Bay, Ling Ao project), technological cooperation and contracts in the nuclear domain, technology transfers in the domain of nuclear fuels, activities and subsidiaries in the domain of industrial equipment, and the Framatome Connectors International (FCI) subsidiary in the domain of connectors engineering). The general activities of Framatome in the nuclear, industrial equipment, and connectors engineering domains are then summarized in the next 3 parts. (J.S.)
Sofue, Keitaro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki; Sugimura, Kazuro
2017-07-01
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, the mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR, indicating that SEMAR improves image quality in patients with small metal implants by reducing metallic artefacts. • The SEMAR algorithm significantly reduces metallic artefacts from small implants in abdominal CT. • SEMAR can improve image quality of the liver in dynamic CECT. • Confident visualization of hepatic vascular anatomy can also be improved by SEMAR.
Liu, Song; An, Cuihua; Zang, Lei; Chang, Xiaoya; Guo, Huinan; Jiao, Lifang; Wang, Yijing
2018-04-16
A 3D flower-like mesoporous Ni@C composite material has been synthesized by using a facile and economical one-pot hydrothermal method. This unique 3D flower-like Ni@C composite, which exhibited a high surface area (522.4 m²·g⁻¹), consisted of highly dispersed Ni nanoparticles on mesoporous carbon flakes. The effect of calcination temperature on the electrochemical performance of the Ni@C composite was systematically investigated. The optimized material (Ni@C 700) displayed high specific capacity (1306 F·g⁻¹ at 2 A·g⁻¹) and excellent cycling performance (96.7% retention after 5000 cycles). Furthermore, an asymmetric supercapacitor (ASC) that contained Ni@C 700 as cathode and mesoporous carbon (MC) as anode demonstrated high energy density (60.4 Wh·kg⁻¹ at a power density of 750 W·kg⁻¹). © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Pawel Rouba Billowicz
2016-04-01
"Combats Escènics" (Stage Combat) is a work on the artistic interpretation of violence by performing artists, intended to entertain the audience and convey a humanist message through a ritual choreography. This study presents a classification of stage combat from the dual agonist/antagonist perspective, offers a historical survey of the artistic representation of combat across different periods and cultures, addresses the stage preparation of the actor and the choreographer, and outlines the future prospects of this artistic discipline. Study by Pawel Rouba Billewicz (Inowroclaw, Poland, 1939 - Barcelona, 2007), director, choreographer, actor, fencing master, master of gesture and pantomime, and professor at the INEF de Catalunya. This editorially unpublished article is published posthumously by Apunts. Educació Física i Esports as a tribute to and recognition of the author for his extraordinary and versatile contribution to the field of art and Physical Activity and Sport.
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...
International Nuclear Information System (INIS)
Arpaia, Pasquale; Buzio, Marco; Inglese, Vitaliano
2010-01-01
A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement systems for testing magnets of the Large Hadron Collider at the European Organization for Nuclear Research (CERN) are reported
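A toy rendering of the first technique, assuming an illustrative thresholding rule: a sample is kept only when the power of recent flux increments rises above the noise level (with a margin for the doubled noise variance of first differences). The window length and margin factor are assumptions, not the paper's tracking condition.

```python
import numpy as np

def adaptive_sample(flux, noise_rms, window=8):
    """Keep a sample only when the recent flux-increment power exceeds the
    target noise level; quiet stretches are discarded."""
    kept_idx = [0]
    for k in range(1, len(flux)):
        lo = max(0, k - window)
        increment_power = np.mean(np.diff(flux[lo:k + 1]) ** 2)
        # factor 4 = margin over the doubled noise variance of first differences
        if increment_power > 4 * noise_rms ** 2:
            kept_idx.append(k)
    return np.array(kept_idx)

t = np.linspace(0, 1, 2000)
flux = np.where(t < 0.5, 0.01 * t, np.sin(40 * (t - 0.5)))  # quiet, then active
flux += np.random.default_rng(0).normal(0, 1e-3, t.size)
kept = adaptive_sample(flux, noise_rms=1e-3)
print(f"kept {kept.size} of {t.size} samples")
```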
Opposition-Based Adaptive Fireworks Algorithm
Directory of Open Access Journals (Sweden)
Chibing Gong
2016-07-01
A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
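The OBL ingredient itself is one line of arithmetic: the opposite of a candidate x on [a, b] is a + b − x. A minimal sketch of opposition-based initialization on a benchmark sphere function follows; OAFWA also applies opposition during the iterations, which is not shown here.

```python
import random

def opposition_init(pop_size, bounds, fitness):
    """Opposition-based initialisation: for every random candidate x, also
    evaluate its opposite a + b - x and keep the fitter of each pair."""
    pop = []
    for _ in range(pop_size):
        x = [random.uniform(a, b) for a, b in bounds]
        x_opp = [a + b - xi for (a, b), xi in zip(bounds, x)]
        pop.append(min((x, x_opp), key=fitness))
    return pop

sphere = lambda v: sum(vi * vi for vi in v)          # benchmark function
pop = opposition_init(20, bounds=[(-5.0, 5.0)] * 3, fitness=sphere)
print(min(pop, key=sphere))
```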
Opposition-Based Adaptive Fireworks Algorithm
Chibing Gong
2016-01-01
A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...
Mapeamento das ações de enfermagem do CIPESC às intervenções de enfermagem da NIC [Mapping of CIPESC nursing actions to NIC nursing interventions]
Directory of Open Access Journals (Sweden)
Tânia Couto Machado Chianca
2003-10-01
Terms used in an instrument of the International Classification of Nursing Practice in Collective Health (CIPESC) project in Brazil were analyzed in light of the nursing interventions established in the Nursing Interventions Classification (NIC) to determine whether they could represent nursing practice in Brazil. A three-step process was employed to link the terms, and a descriptive analysis was conducted. It was concluded that the NIC can be useful in Brazil.
Sòcrates l'impenetrable i l'amor platònic [The impenetrable Socrates and Platonic love]
Pàmias i Massana, Jordi
2013-01-01
A UAB researcher has published an article reinterpreting a passage of Plato's dialogue The Symposium to establish that the strength of the philosopher Socrates lies in his capacity to resist seduction. Socrates thus knows desire, but acts on it only when true love is involved. This new interpretation brings us closer to the core of the doctrine of Platonic love.
Dynamic route guidance algorithm based on artificial immune system
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
To improve the performance of K-shortest-path search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-path search in urban traffic guidance systems. Owing to the immune memory and globally parallel search ability of artificial immune systems, the K shortest paths can be found without repetition, which clearly shows the superiority of the algorithm over conventional ones. The algorithm not only offers better parallelism but also prevents the premature convergence that often occurs in genetic algorithms. It is therefore especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.
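For comparison with the immune-system approach, here is a simple reference way to obtain K shortest paths: a lazy Dijkstra variant that lets the target be settled up to K times. It may revisit nodes and is not the paper's algorithm; the toy graph is an illustrative assumption.

```python
import heapq

def k_shortest_paths(graph, src, dst, K):
    """Lazy Dijkstra variant: pop states from a heap and allow each node to
    be settled up to K times; the first K arrivals at dst are the K shortest
    paths. Simple reference method, not the immune-system algorithm."""
    heap = [(0.0, src, [src])]
    found, visits = [], {}
    while heap and len(found) < K:
        cost, node, path = heapq.heappop(heap)
        visits[node] = visits.get(node, 0) + 1
        if node == dst:
            found.append((cost, path))
            continue
        if visits[node] > K:          # no need to expand a node more than K times
            continue
        for nxt, w in graph.get(node, []):
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return found

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
for cost, path in k_shortest_paths(graph, 'A', 'D', K=3):
    print(cost, ' -> '.join(path))
```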
Abu-Raiya, Hisham
2014-04-01
In this paper, comparisons are made between a newly developed Qura'nic theory of personality and the Freudian and Jungian theories of the mind. Notable similarities were found between the Freudian id, ego, superego and neurosis and the Qura'nic nafs ammarah besoa' (evil-commanding psyche), a'ql (intellect), al-nafs al-lawammah (the reproachful psyche) and al-nafs al-marid'a (the sick psyche), respectively. Noteworthy resemblances were detected also between the Jungian concepts collective unconscious, archetypes, Self and individuation and the Qura'nic constructs roh (spirit), al-asmaa' (the names), qalb (heart), and al-nafs al-mutmainnah (the serene psyche), respectively. These parallels, as well as the departure points, between the models are thoroughly discussed and analyzed. The comparisons performed in this paper open new avenues for dialogue between western models of the psyche and their Muslim counterparts, a dialogue that can enrich both perspectives and advance the field of psychology.
Energy Technology Data Exchange (ETDEWEB)
Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto (Japan); Funama, Yoshinori [Kumamoto University, Department of Medical Physics, Faculty of Life Sciences, Kumamoto (Japan); Kawano, Takayuki [Kumamoto University Graduate School, Department of Neurosurgery, Faculty of Life Sciences Research, Kumamoto (Japan)
2016-05-15
We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)
International Nuclear Information System (INIS)
Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki; Funama, Yoshinori; Kawano, Takayuki
2016-01-01
We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)
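The artefact index used above is, in the CT literature, commonly defined from the excess variance in a region near the metal relative to an artefact-free reference region; the excerpt does not give the authors' exact definition, so the conventional form below is an assumption, as are the toy ROI statistics.

```python
import numpy as np

def artifact_index(roi_near_metal, roi_reference):
    """Conventional CT artefact index: noise in excess of the baseline,
    AI = sqrt(max(SD_artifact^2 - SD_ref^2, 0)). Assumed, not quoted, form."""
    excess = np.var(roi_near_metal) - np.var(roi_reference)
    return float(np.sqrt(max(excess, 0.0)))

rng = np.random.default_rng(0)
ref = rng.normal(40.0, 12.0, size=(50, 50))      # HU, artefact-free liver ROI
near = rng.normal(40.0, 30.0, size=(50, 50))     # HU, streaked ROI near the coil
print("AI ~", round(artifact_index(near, ref), 1))
```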
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster and with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.
Directory of Open Access Journals (Sweden)
Jianfeng Guan
2017-01-01
The existing spray-based routing algorithms in DTNs cannot dynamically adjust the number of message copies based on actual conditions, which results in a waste of resources and a reduction of the message delivery rate. Besides, the existing spray-based routing protocols may suffer blind-spot or dead-end problems due to the limitations of the various given metrics. Therefore, this paper proposes a social relationship based adaptive multiple spray-and-wait routing algorithm (called SRAMSW) which retransmits message copies based on their residence times in the node via buffer management and selects forwarders based on the social relationship. By these means, the proposed algorithm can relieve message congestion in the buffer and improve the probability of replicas reaching their destinations. The simulation results under different scenarios show that the SRAMSW algorithm can improve the message delivery rate, reduce the messages' dwell time in the cache, and thereby use the buffer more effectively.
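The core spray mechanics underneath such protocols are easy to sketch: in binary spray-and-wait, a node holding several copies hands half to a relay and, once down to one copy, waits for the destination. The social-tie gate below is a stand-in for the SRAMSW relationship metric, and the residence-time-based respray from the abstract is not modelled.

```python
def forward(node_copies, met_relay, social_tie, tie_threshold=0.5):
    """Binary spray-and-wait core with a social-relationship gate: while a
    node holds more than one copy it hands half to a suitable relay; with a
    single copy left it waits for the destination (direct delivery)."""
    if node_copies > 1 and met_relay and social_tie >= tie_threshold:
        give = node_copies // 2            # spray half of the copies
        return node_copies - give, give
    return node_copies, 0                  # wait phase: no further spraying

copies, handed = forward(node_copies=8, met_relay=True, social_tie=0.7)
print("kept", copies, "handed over", handed)
```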
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-09-09
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
Directory of Open Access Journals (Sweden)
Lingli Cui
2014-09-01
Full Text Available This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and
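The core mechanics of single-atom matching pursuit with an attenuation-based stopping rule can be sketched as follows; the dictionary, tolerance and toy signal are illustrative assumptions, and the paper's modulation sub-dictionaries are not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, atten_tol=1e-3, max_iter=100):
    """Single-atom matching pursuit with an attenuation-based stopping rule.

    `dictionary` has unit-norm atoms as columns; iteration stops when the
    relative drop in residual energy (a simple stand-in for the paper's
    attenuation coefficient) falls below `atten_tol`.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    prev_energy = residual @ residual
    for _ in range(max_iter):
        corr = dictionary.T @ residual          # match every atom at once
        best = np.argmax(np.abs(corr))
        coeffs[best] += corr[best]
        residual -= corr[best] * dictionary[:, best]
        energy = residual @ residual
        if prev_energy > 0 and (prev_energy - energy) / prev_energy < atten_tol:
            break                               # attenuation too small: likely noise
        prev_energy = energy
    return coeffs, dictionary @ coeffs          # sparse code and denoised signal

# toy usage: random unit-norm dictionary, sparse synthetic signal plus noise
rng = np.random.default_rng(1)
D = rng.normal(size=(256, 512))
D /= np.linalg.norm(D, axis=0)
x = D[:, 7] * 3.0 + D[:, 42] * 2.0 + 0.05 * rng.normal(size=256)
code, denoised = matching_pursuit(x, D)
```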
Coburn, Cynthia E.; Penuel, William R.; Geil, Kimberly E.
2015-01-01
The Carnegie Foundation for the Advancement of Teaching is a nonprofit, operating foundation with a long tradition of developing and studying ways to improve teaching practice. For the past three years, the Carnegie Foundation has initiated three different Networked Improvement Communities (NICs). The first, Quantway, is addressing the high…
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome
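A minimal sketch of the common-gain idea discussed above: one real-valued gain per frequency bin, applied identically to both ears so that interaural level and phase cues survive. The Wiener-style gain and the assumption of a known noise PSD stand in for the paper's MMSE filter and its blocking-based estimators; frame size and parameters are illustrative.

```python
import numpy as np

def common_gain_denoise(left, right, noise_psd, frame=256, g_min=0.1):
    """Apply one real gain per frequency bin to BOTH channels.

    A common gain leaves interaural differences untouched, which is the
    spatial-cue-preservation property described above. `noise_psd`
    (length frame//2+1) is assumed known or externally estimated.
    """
    out_l, out_r = np.zeros_like(left, dtype=float), np.zeros_like(right, dtype=float)
    win = np.hanning(frame)
    for start in range(0, len(left) - frame + 1, frame // 2):
        L = np.fft.rfft(win * left[start:start + frame])
        R = np.fft.rfft(win * right[start:start + frame])
        noisy_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)   # binaural average
        gain = np.maximum(1.0 - noise_psd / np.maximum(noisy_psd, 1e-12), g_min)
        out_l[start:start + frame] += np.fft.irfft(gain * L)  # overlap-add
        out_r[start:start + frame] += np.fft.irfft(gain * R)
    return out_l, out_r

# toy usage: tone buried in uncorrelated white noise on each ear
rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 440 / 16000 * np.arange(16000))
noise_l, noise_r = 0.3 * rng.normal(size=(2, 16000))
npsd = np.full(129, 0.3 ** 2 * np.sum(np.hanning(256) ** 2))  # rough flat PSD guess
dl, dr = common_gain_denoise(s + noise_l, s + noise_r, npsd)
```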
International Nuclear Information System (INIS)
Liu, Wei; Liu, Shutian; Liu, Zhengjun
2015-01-01
We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing the error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are simultaneously completed without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to the existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
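The error-reduction phase retrieval loop that the scheme relies on is the classical Gerchberg-Saxton-type alternation between domain constraints. A generic single-plane sketch (not the authors' cascaded diffractive system; sizes and supports are illustrative) looks like this:

```python
import numpy as np

def error_reduction(target_magnitude, support, iters=200, seed=0):
    """Classic error-reduction phase retrieval (Gerchberg-Saxton family).

    Alternates between enforcing the measured Fourier magnitude and the
    object-domain support constraint.
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(target_magnitude.shape))
    field = target_magnitude * phase
    for _ in range(iters):
        obj = np.fft.ifft2(field)
        obj = np.where(support, obj.real, 0.0)                   # object constraint
        spec = np.fft.fft2(obj)
        field = target_magnitude * np.exp(1j * np.angle(spec))   # magnitude constraint
    return obj

# toy usage: recover a supported image from its Fourier magnitude alone
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
sup = np.zeros_like(img, dtype=bool); sup[16:48, 16:48] = True
rec = error_reduction(np.abs(np.fft.fft2(img)), sup)
```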
Energy Technology Data Exchange (ETDEWEB)
Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
International Nuclear Information System (INIS)
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-01-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...
Directory of Open Access Journals (Sweden)
Jiaying Du
2018-04-01
Full Text Available Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
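As a concrete instance of the Kalman-filter-based group of algorithms surveyed above, here is a minimal textbook two-state filter that fuses a drifting gyroscope rate with a noisy absolute angle reference (e.g. an accelerometer tilt estimate); the noise parameters and signals are illustrative.

```python
import numpy as np

def kalman_gyro(gyro_rate, angle_ref, dt=0.01, q=1e-4, r=0.05):
    """1-D Kalman filter with state [angle, gyro_bias]: the gyro rate is
    integrated after subtracting the estimated bias, and the noisy
    reference angle corrects both angle and bias."""
    x = np.zeros(2)                          # [angle, bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle += (rate - bias) * dt
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    out = []
    for rate, z in zip(gyro_rate, angle_ref):
        x = F @ x + np.array([rate * dt, 0.0])   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                      # update
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.asarray(out)

# toy usage: biased, noisy gyro plus a noisier absolute reference
t = np.arange(0, 10, 0.01)
true = np.sin(0.5 * t)
rate = np.gradient(true, 0.01) + 0.02 + 0.01 * np.random.default_rng(1).normal(size=len(t))
ref = true + 0.05 * np.random.default_rng(2).normal(size=len(t))
est = kalman_gyro(rate, ref)
```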
Montenegro Siguencia, Diana Fernanda
2013-01-01
Important issues such as the NICs (International Accounting Standards), the difference between IAS 11 and IAS 2, the phases of a construction project, and construction costs are covered. Drawing on knowledge of the company and these concepts, a management model for inventory control is developed, comprising the main procedures, process controls, flow charts and forms to be used in construction activity under the application of IAS 11.
Wavelet based edge detection algorithm for web surface inspection of coated board web
Energy Technology Data Exchange (ETDEWEB)
Barjaktarovic, M; Petricevic, S, E-mail: slobodan@etf.bg.ac.r [School of Electrical Engineering, Bulevar Kralja Aleksandra 73, 11000 Belgrade (Serbia)
2010-07-15
This paper presents a significant improvement of an already installed vision system designed for real-time coated board inspection. The improvement is achieved with the development of a new algorithm for edge detection, based on the redundant (undecimated) wavelet transform. Compared to the existing algorithm, better delineation of edges is achieved, which yields a better defect detection probability and more accurate geometrical classification, providing additional reduction of waste. The algorithm will also provide more detailed classification and more reliable tracking of defects. This improvement requires minimal changes in the processing hardware: only a replacement of the graphics card would be needed, adding only negligibly to the system cost. Other changes are accomplished entirely in the image processing software.
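A redundant (undecimated) wavelet detail extraction can be sketched with the à trous scheme below; the B3-spline kernel and level count are generic choices, assumptions rather than the installed system's actual filters. Because there is no decimation, the detail (edge) bands stay pixel-aligned with the input.

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_edges(image, levels=3):
    """Undecimated (a trous) wavelet detail extraction for edge maps.

    Each level smooths with the B3-spline kernel dilated by 2**level and
    keeps the difference as a detail (edge) band.
    """
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    smooth, details = image.astype(float), []
    for lev in range(levels):
        kernel = np.zeros(4 * 2 ** lev + 1)
        kernel[:: 2 ** lev] = base                  # dilate the kernel with holes
        nxt = convolve1d(convolve1d(smooth, kernel, axis=0), kernel, axis=1)
        details.append(smooth - nxt)                # detail band = edge information
        smooth = nxt
    edge_map = np.sum([np.abs(d) for d in details], axis=0)
    return edge_map, details
```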
Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph
2017-11-01
The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU) for all three algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1. All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in both subjective and objective analyses. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help improve image quality in CT.
Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms
Directory of Open Access Journals (Sweden)
Fernando A. Auat Cheein
2013-01-01
Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent from the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
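In sketch form, the criterion amounts to trying each candidate measurement in a covariance update and keeping the one that shrinks the posterior covariance the most. This is a simplified reading of the covariance-based criterion above; the linear measurement rows and noise value below are illustrative (a nonlinear filter would use the local Jacobian).

```python
import numpy as np

def select_measurement(P, candidates, r=0.1):
    """Pick the candidate measurement that most reduces estimation
    uncertainty, judged by the trace of the posterior covariance.

    P is the current state covariance; each candidate is a linear
    measurement row h with scalar noise variance r.
    """
    best, best_trace = None, np.inf
    for idx, h in enumerate(candidates):
        h = np.atleast_2d(h)
        S = h @ P @ h.T + r                  # innovation covariance
        K = P @ h.T / S                      # Kalman gain for this candidate
        P_post = P - K @ h @ P               # covariance after the update
        if np.trace(P_post) < best_trace:
            best, best_trace = idx, np.trace(P_post)
    return best, best_trace

# toy usage: two candidate sensors observing different state components;
# the sensor observing the more uncertain component wins
P0 = np.diag([4.0, 0.5])
print(select_measurement(P0, [np.array([1.0, 0.0]), np.array([0.0, 1.0])]))
```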
A novel fluffy nanostructured 3D network of Ni(C7H4O5) for supercapacitors
International Nuclear Information System (INIS)
Chen, Qiulin; Lei, Shuijin; Chen, Lianfu; Deng, Peiqin; Xiao, Yanhe; Cheng, Baochang
2017-01-01
Highlights: • A fluffy 3D network of the Ni(C7H4O5) complex is prepared on Ni foam for the first time. • The fluffy 3D network shows high areal capacitance and excellent cycle stability. • The fluffy network has superior pseudocapacitive performance compared with the powder. • An asymmetric supercapacitor with high capacitance and energy density is assembled. - Abstract: Supercapacitors have raised considerable research interest in recent years due to their extensive potential application in next-generation energy storage, and the development of new electrode materials for supercapacitors remains of great importance. In this research, nickel gallate complex (Ni(C7H4O5)) nanostructures are successfully grown on nickel foam by a facile hydrothermal route and can be directly used as electrodes for supercapacitors. X-ray diffraction patterns show that the sample is amorphous. Scanning electron microscopy images reveal that the products consist of a novel fluffy 3D network with a mass of fibers. Electrochemical measurements demonstrate that the prepared Ni(C7H4O5) electrode possesses a specific capacitance of 3.688 F cm−2 (1229.3 F g−1) at a current density of 9 mA cm−2 (3 A g−1). It presents excellent cycling stability with a capacitance retention of 87.9% after 5000 cycles even at a very high current density of 40 mA cm−2. An asymmetric supercapacitor device is assembled using the Ni(C7H4O5) sample as the positive electrode and activated carbon as the negative one. A high gravimetric capacitance of 71.4 F g−1 at a current density of 0.5 A g−1 can be achieved. The fabricated device delivers a highest energy density of 23.8 W h kg−1 at a power density of 388.2 W kg−1 within a voltage window of 1.55 V. This strategy could be extended to other organometallic compounds for supercapacitors.
An error reduction algorithm to improve lidar turbulence estimates for wind energy
Directory of Open Access Journals (Sweden)
J. F. Newman
2017-02-01
Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine
78 FR 23872 - HIPAA Privacy Rule and the National Instant Criminal Background Check System (NICS)
2013-04-23
..., gender, citizenship, race and ethnicity; and ``yes'' or ``no'' answers to questions about the person's... elements of an express permission, we would consider limiting the information to be disclosed to the... information to States on HIPAA Privacy Rule policies as they relate to NICS reporting? Are there central...
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which demonstrates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
Institute of Scientific and Technical Information of China (English)
徐宁; 章云; 周如旗
2013-01-01
Aiming at the difficulty of transforming the discernibility function into normal form on large datasets in order to obtain reducts, a same-element conversion reduction algorithm based on the discernibility matrix and discernibility function is put forward. The discernibility matrix retains all classification information of the dataset, and the discernibility function casts this classification information into mathematical-logic form. The algorithm converts the Conjunctive Normal Form (CNF) into Disjunctive Normal Form (DNF) step by step, starting from low-rank terms. According to the same-element conversion algorithm and the high-element absorption algorithm, if the higher-rank terms are fully absorbed the algorithm returns; otherwise it calls itself and enters the next cycle. Calculation results show that this algorithm greatly reduces the scale of each transformation and makes flexible use of mature recursion, so the computation is compact and effective.
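A toy version of the discernibility-matrix route to reducts, with CNF-to-DNF distribution and absorption, can be written as follows; the same-element bookkeeping of the paper is simplified away, and the attribute names and decision table are illustrative.

```python
from itertools import combinations

def reducts_via_discernibility(objects, decisions, attrs):
    """Build the discernibility function as a CNF (set of clauses) and
    convert it to DNF with absorption, yielding the reducts.

    Exponential in the worst case; the same-element/absorption tricks in
    the paper aim to keep the intermediate DNFs small.
    """
    # CNF: one clause per object pair with different decisions
    cnf = set()
    for (o1, d1), (o2, d2) in combinations(zip(objects, decisions), 2):
        if d1 != d2:
            clause = frozenset(a for a in attrs if o1[a] != o2[a])
            if clause:
                cnf.add(clause)
    # distribute CNF into DNF, absorbing supersets as early as possible
    dnf = {frozenset()}
    for clause in sorted(cnf, key=len):            # low-rank clauses first
        expanded = {term | {lit} for term in dnf for lit in clause}
        dnf = {t for t in expanded
               if not any(other < t for other in expanded)}  # absorption
    return dnf

# toy decision table: two condition attributes, one decision column
objs = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 1}]
print(reducts_via_discernibility(objs, [0, 1, 1], ["a", "b"]))  # {b} is a reduct
```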
Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A
2018-05-02
The aim of this study was to investigate the use of de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for imaging reconstruction. A dedicated de-blooming algorithm was used for imaging processing. We found a smaller bias towards measurement of stenosis using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND), from 62.5% to 83.3% (HD STND); while positive predictive value (PPV) increased from 69.8% to 83.3% (STND), from 76.9% to 88.2% (HD STND). In the patient group, reduction in calcification volume was 48.1 ± 10.3%, reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm could effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve diagnostic accuracy of CCTA in assessing coronary stenosis.
Ultrasound speckle reduction based on fractional order differentiation.
Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng
2017-07-01
Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and causes difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by effectively combining fractional calculus theory with a partial differential equation, and its numerical algorithm was implemented using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods [the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique], and it is significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.
Peak reduction for commercial buildings using energy storage
Chua, K. H.; Lim, Y. S.; Morris, S.
2017-11-01
Battery-based energy storage has emerged as a cost-effective solution for peak reduction due to falling battery prices. In this study, a battery-based energy storage system is developed and implemented to achieve optimal peak reduction for commercial customers within the limited energy capacity of the storage. The energy storage system is formed by three bi-directional power converters rated at 5 kVA each and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective of the three controllers in peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristics of the building as well as the usable energy without over-discharging the batteries.
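For reference, the simplest of the three controllers, the fixed-threshold policy, can be sketched as below; the adaptive and fuzzy controllers replace the fixed threshold with a moving or rule-based one. All ratings and the demand profile are illustrative, not the installed system's.

```python
import numpy as np

def peak_shave(load_kw, threshold_kw, batt_kwh, p_max_kw, dt_h=0.5, eff=0.95):
    """Fixed-threshold peak shaving: discharge when load exceeds the
    threshold, recharge below it, within power and energy limits."""
    soc = batt_kwh / 2.0                       # start half charged
    grid = np.empty_like(load_kw, dtype=float)
    for t, load in enumerate(load_kw):
        if load > threshold_kw:                # discharge to clip the peak
            p = min(load - threshold_kw, p_max_kw, soc * eff / dt_h)
            soc -= p * dt_h / eff
        else:                                  # recharge spare headroom
            p = -min(threshold_kw - load, p_max_kw, (batt_kwh - soc) / (eff * dt_h))
            soc += -p * dt_h * eff
        grid[t] = load - p                     # power actually drawn from grid
    return grid

demand = np.array([30, 42, 55, 61, 48, 35], dtype=float)   # kW, half-hourly
print(peak_shave(demand, threshold_kw=45, batt_kwh=64, p_max_kw=15))
```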
Parallel Landscape Driven Data Reduction & Spatial Interpolation Algorithm for Big LiDAR Data
Directory of Open Access Journals (Sweden)
Rahil Sharma
2016-06-01
Full Text Available Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is used widely in applications like creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, agricultural crop studies, etc. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in building LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made possible high resolution modeling of terrain surfaces. However, high density data result in massive data volumes, which pose computing issues. Computational time required for dissemination, processing and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain, which addresses the challenging problem in the area of spatial data analysis of reducing this dense LiDAR data without sacrificing its accuracy. To the best of our knowledge, this is the first ever landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a 52% reduced LiDAR dataset generated by our algorithm, compared to the DEM generated from the original, complete LiDAR dataset. For the accuracy of our statistical analysis, we perform Root Mean Square Error (RMSE) analysis comparing all of the grid points of the original DEM to the DEM generated from reduced data, instead of comparing a few random control points. Besides, our multi-core data reduction algorithm is highly scalable. We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that the DEMs it generates are time-efficient and have better accuracy than the ones generated by the traditional IDW method.
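The serial core of IDW interpolation onto a DEM grid looks like the sketch below; the paper parallelises this per-cell loop across cores, and the neighbour count, power parameter and toy data here are assumptions.

```python
import numpy as np

def idw_grid(xy, z, grid_x, grid_y, power=2.0, k=8):
    """Inverse Distance Weighted interpolation of scattered LiDAR points
    onto a regular DEM grid, using the k nearest points per cell."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    dem = np.empty(gx.shape)
    for idx in np.ndindex(gx.shape):
        d2 = (xy[:, 0] - gx[idx]) ** 2 + (xy[:, 1] - gy[idx]) ** 2
        near = np.argpartition(d2, k)[:k]            # k nearest neighbours
        w = 1.0 / np.maximum(d2[near], 1e-12) ** (power / 2.0)
        dem[idx] = np.sum(w * z[near]) / np.sum(w)   # weighted elevation
    return dem

# toy usage: 500 random points interpolated onto a 20 m x 20 m, 1 m grid
rng = np.random.default_rng(0)
pts = rng.uniform(0, 20, (500, 2))
elev = np.sin(pts[:, 0] / 3.0) + 0.1 * rng.normal(size=500)
dem = idw_grid(pts, elev, np.arange(20), np.arange(20))
```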
International Nuclear Information System (INIS)
Niknam, Taher; Azadfarsani, Ehsan; Jabbari, Masoud
2012-01-01
Highlights: ► Network reconfiguration is a very important way to save electrical energy. ► This paper proposes a new algorithm to solve the DFR. ► The algorithm combines NFAPSO with NM. ► The proposed algorithm is tested on two distribution test feeders. - Abstract: Network reconfiguration for loss reduction in distribution systems is a very important way to save electrical energy. This paper proposes a new hybrid evolutionary algorithm to solve the Distribution Feeder Reconfiguration (DFR) problem. The algorithm is based on the combination of a New Fuzzy Adaptive Particle Swarm Optimization (NFAPSO) and the Nelder–Mead simplex search method (NM), called NFAPSO–NM. In the proposed algorithm, the new fuzzy adaptive particle swarm optimization consists of two parts. The first part is Fuzzy Adaptive Binary Particle Swarm Optimization (FABPSO), which determines the status of tie switches (open or closed), and the second part is Fuzzy Adaptive Discrete Particle Swarm Optimization (FADPSO), which determines the sectionalizing switch number. On the other side, because the results of binary PSO (BPSO) and discrete PSO (DPSO) algorithms depend highly on the values of their parameters, such as the inertia weight and learning factors, a fuzzy system is employed to adaptively adjust the parameters during the search process. Moreover, the Nelder–Mead simplex search method is combined with the NFAPSO algorithm to improve its performance. Finally, the proposed algorithm is tested on two distribution test feeders. The results of simulation show that the proposed method is very powerful and reliably obtains the global optimum.
Normalization based K means Clustering Algorithm
Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika
2015-01-01
K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means algorithm applies normalization to the available data prior to clustering and calculates initial centroids based on weights. Experimental results prove the betterment of the proposed N-K means clustering algorithm over existing...
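A compact sketch of the normalize-then-cluster idea follows: min-max scaling plus Lloyd iterations, with a simple distance-to-mean ordering standing in for the paper's weight-based centroid initialisation (the initialisation rule here is an assumption, not the paper's exact formula).

```python
import numpy as np

def nk_means(X, k, iters=50):
    """Min-max normalize the data, then run Lloyd's k-means with initial
    centroids spread over the normalized points."""
    Xn = (X - X.min(0)) / np.maximum(X.max(0) - X.min(0), 1e-12)
    # order points by closeness to the overall mean, then take a spread of k
    order = np.argsort(np.linalg.norm(Xn - Xn.mean(0), axis=1))
    centroids = Xn[order[:: max(len(X) // k, 1)]][:k]
    for _ in range(iters):
        labels = np.argmin(((Xn[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        new = np.array([Xn[labels == j].mean(0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# toy usage: features on very different scales, where normalization matters
X = np.random.default_rng(5).normal(size=(300, 4)) * [1, 10, 5, 2]
labels, C = nk_means(X, k=3)
```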
Survey of Object-Based Data Reduction Techniques in Observational Astronomy
Directory of Open Access Journals (Sweden)
Łukasik Szymon
2016-01-01
Full Text Available Dealing with astronomical observations represents one of the most challenging areas of big data analytics. Besides the huge variety of data types and the dynamics related to continuous data flow from multiple sources, handling enormous volumes of data is essential. This paper provides an overview of methods aimed at reducing both the number of features/attributes and the number of data instances. It concentrates on data mining approaches that are not tied to instruments and observation tools but instead work on processed object-based data. The main goal of this article is to describe existing datasets on which algorithms are frequently tested, to characterize and classify available data reduction algorithms, and to identify promising solutions capable of addressing present and future challenges in astronomy.
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircrafts to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and heuristics requirement to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that X-56A ROM with less than one-seventh the number of states relative to the original model is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of the grid points in the parameter space where flight dynamics varies dramatically to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Manhart, Michael; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman
2018-01-01
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical-board-approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8 s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near-field (<3 cm) and far-field (>3 cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p < 0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p < 0.05). The prototype MAR algorithm improves image quality in proximity of metallic coil and catheter artefacts. • Metal objects cause artefacts in cone-beam computed tomography (CBCT) imaging. • These artefacts can be corrected by metal artefact reduction (MAR) algorithms. • Corrected images show significantly better visibility of nearby hepatic vessels and tissue. • Better visibility may facilitate image interpretation, save time and radiation exposure.
Impacto de la NIC 16 en pymes manufactureras: caso Cuenca-Ecuador
Illescas Sigcha, Mayra Jimena
2017-01-01
This paper presents the results of research on 172 manufacturing companies in the city of Cuenca, of which only 90% of large companies, 63% of medium-sized companies, 39% of small companies and 6% of microenterprises applied IAS 16, Property, Plant and Equipment; the companies that did not apply it cited unawareness of the standard and a lack of economic resources.
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest...
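The IO routine is essentially greedy selection. A sketch under a linear-Gaussian assumption, where each candidate site contributes one sensitivity (Jacobian) row and candidates are scored by the trace of the posterior covariance from the usual Bayesian update; the toy dimensions and noise are illustrative, not the paper's inversion setup:

```python
import numpy as np

def incremental_network(H_candidates, P_prior, R, n_pick):
    """Greedy (incremental optimisation) station selection: at each step
    add the candidate that most reduces posterior flux uncertainty."""
    chosen, P = [], P_prior.copy()
    for _ in range(n_pick):
        best, best_P, best_tr = None, None, np.inf
        for i, h in enumerate(H_candidates):
            if i in chosen:
                continue
            h = np.atleast_2d(h)
            S = h @ P @ h.T + R                          # innovation covariance
            P_new = P - P @ h.T @ np.linalg.inv(S) @ h @ P
            if np.trace(P_new) < best_tr:
                best, best_P, best_tr = i, P_new, np.trace(P_new)
        chosen.append(best)
        P = best_P                                       # commit the greedy choice
    return chosen, P

# toy usage: 12 candidate sites observing 6 flux regions
rng = np.random.default_rng(3)
H = rng.normal(size=(12, 6))
sites, P_post = incremental_network(H, np.eye(6), R=np.array([[0.5]]), n_pick=5)
print(sites, np.trace(P_post))
```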
Energy Technology Data Exchange (ETDEWEB)
Sofue, Keitaro; Sugimura, Kazuro [Kobe University Graduate School of Medicine, Department of Radiology, Kobe, Hyogo (Japan); Yoshikawa, Takeshi; Ohno, Yoshiharu [Kobe University Graduate School of Medicine, Advanced Biomedical Imaging Research Center, Kobe, Hyogo (Japan); Kobe University Graduate School of Medicine, Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe, Hyogo (Japan); Negi, Noriyuki [Kobe University Hospital, Division of Radiology, Kobe, Hyogo (Japan); Inokawa, Hiroyasu; Sugihara, Naoki [Toshiba Medical Systems Corporation, Otawara, Tochigi (Japan)
2017-07-15
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using linear-weighted κ test. Mean HU and AI on images with SEMAR was significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image qualities and visualizations of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
International Nuclear Information System (INIS)
Sofue, Keitaro; Sugimura, Kazuro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki
2017-01-01
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using linear-weighted κ test. Mean HU and AI on images with SEMAR was significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image qualities and visualizations of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems
Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai
2017-09-01
In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
Segment LLL Reduction of Lattice Bases Using Modular Arithmetic
Directory of Open Access Journals (Sweden)
Sanjay Mehrotra
2010-07-01
Full Text Available The algorithm of Lenstra, Lenstra, and Lovász (LLL) transforms a given integer lattice basis into a reduced basis. Storjohann improved the worst-case complexity of LLL algorithms by a factor of O(n) using modular arithmetic. Koy and Schnorr developed a segment-LLL basis reduction algorithm that generates a lattice basis satisfying a weaker condition than the LLL reduced basis, with an O(n) improvement over the LLL algorithm. In this paper we combine Storjohann's modular arithmetic approach with the segment-LLL approach to further improve the worst-case complexity of the segment-LLL algorithms by a factor of n^0.5.
Wavelet-LMS algorithm-based echo cancellers
Seetharaman, Lalith K.; Rao, Sathyanarayana S.
2002-12-01
This paper presents echo cancellers based on the wavelet-LMS algorithm. The performance of the Least Mean Square (LMS) algorithm in the wavelet transform domain is observed, and its application to echo cancellation is analyzed. The Widrow-Hoff Least Mean Square algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are largely non-stationary in nature, and errors crop up when the Least Mean Square algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise in how well transitions or discontinuities can be located. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the Least Mean Square algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower mean square error (MSE).
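The time-domain Widrow-Hoff LMS core of such an echo canceller is only a few lines; the wavelet variant applies the same update to sub-band coefficients instead of time-domain taps. Tap count, step size and the toy echo path below are illustrative assumptions.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, taps=64, mu=0.01):
    """Classic Widrow-Hoff LMS echo canceller: adapt an FIR filter so its
    output matches the echo of the far-end signal picked up by the mic;
    the residual e is the echo-free output."""
    w = np.zeros(taps)
    e = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]        # most recent samples first
        y = w @ x                            # estimated echo
        e[n] = mic[n] - y                    # error = cancelled output
        w += mu * e[n] * x                   # LMS coefficient update
    return e, w

# toy usage: mic picks up a delayed, attenuated copy of the far-end signal
rng = np.random.default_rng(2)
far = rng.normal(size=4000)
echo = 0.6 * np.concatenate([np.zeros(10), far[:-10]])
clean, w = lms_echo_canceller(far, echo + 0.01 * rng.normal(size=4000))
```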
A network-flow based valve-switching aware binding algorithm for flow-based microfluidic biochips
DEFF Research Database (Denmark)
Tseng, Kai-Han; You, Sheng-Chi; Minhass, Wajid Hassan
2013-01-01
Designs of flow-based microfluidic biochips are receiving much attention recently because they replace the conventional biological automation paradigm and are able to integrate different biochemical analysis functions on a chip. However, as the design complexity increases, a flow-based microfluidic biochip needs more chip-integrated micro-valves, i.e., the basic unit of fluid-handling functionality, to manipulate the fluid flow for biochemical applications. Moreover, frequent switching of micro-valves results in decreased reliability. To minimize the valve-switching activities, we develop a network-flow based resource binding algorithm based on breadth-first search (BFS) and minimum cost maximum flow (MCMF) in architectural-level synthesis. The experimental results show that our methodology not only achieves a significant reduction of valve-switching activities but also diminishes the application completion time.
Decoding Hermitian Codes with Sudan's Algorithm
DEFF Research Database (Denmark)
Høholdt, Tom; Nielsen, Rasmus Refslund
1999-01-01
We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduction…
DEFF Research Database (Denmark)
Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Pou, Josep
2016-01-01
Due to the high penetration of installed distributed generation units in the power system, the injection of reactive power is required for medium-scale and large-scale grid-connected photovoltaic power plants (PVPPs). Because of the current limitation of the grid-connected inverter, the power drawn from the PV strings should be reduced during voltage sags. In this paper, an algorithm is proposed for determining the reference voltage of the PV string which results in a reduction of the output power to a certain amount. The proposed algorithm calculates the reference voltage for the dc/dc converter controller based on the characteristics of the power-voltage curve of the PV string; therefore, no modification is required in the controller of the dc/dc converter. Simulation results on a 50-kW PV string verified the effectiveness of the proposed algorithm in reducing the power from PV strings under voltage sags.
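In sketch form, the reference-voltage computation reads the P-V curve on the descending branch beyond the maximum power point, where many curtailment schemes operate; the curve shape and the curtailment target below are illustrative assumptions, not the paper's string model.

```python
import numpy as np

def reference_voltage(v_curve, p_curve, p_target):
    """Find the dc/dc reference voltage at which the PV string produces
    p_target, choosing the branch to the right of the maximum power
    point (where dP/dV < 0)."""
    i_mpp = np.argmax(p_curve)
    v_right, p_right = v_curve[i_mpp:], p_curve[i_mpp:]   # descending branch
    # p_right decreases with V, so flip both arrays for np.interp
    return np.interp(p_target, p_right[::-1], v_right[::-1])

# toy P-V curve of a string (roughly parabolic around the MPP at 400 V)
v = np.linspace(0, 600, 601)
p = np.maximum(50e3 * (1 - ((v - 400) / 300.0) ** 2), 0)  # ~50 kW peak
print(reference_voltage(v, p, p_target=30e3))             # curtail to 30 kW
```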
M4GB : Efficient Groebner Basis algorithm
R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)
2017-01-01
We introduce a new efficient algorithm for computing Groebner bases, named M4GB. Like Faugère's algorithm F4, it is an extension of Buchberger's algorithm that describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction…
An Offload NIC for NASA, NLR, and Grid Computing
Awrach, James
2013-01-01
This work addresses distributed data management and access, with dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales in n×10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers for the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet are at best 32 to 40 Gbps full duplex. The best ultra-high-speed offload engines use expensive ASICs (application-specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs, which are also in the planning stages for future portability to ASICs and software to accommodate data rates of 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on requirements of the host, motherboard, or carrier-grade equipment. The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems
Hybrid employment recommendation algorithm based on Spark
Li, Zuoquan; Lin, Yubei; Zhang, Xingming
2017-08-01
Aiming at the real-time application of the collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbouring items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users' information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA), which combines the CCF and CBUI algorithms, is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.
2012-06-04
..., development of statistical and assessment applications using standard html and java script, asp.net and Excel... the application procedures should be directed to Erika McDuffe, Program Specialist, NIC Jails Division... stay, average daily population); staffing levels, and; corrections activities and programs. It is not...
Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray
Directory of Open Access Journals (Sweden)
Lan Shu
2008-07-01
Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively-parallel assays and the simultaneous monitoring of thousands of gene expressions in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information, and this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a kernel-method-based nonlinear dimensionality reduction approach using locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm which denoises datasets is introduced as a replacement for the classical LLE's KNN algorithm. In addition, a kernel-method-based support vector machine (SVM) is used to classify genomic microarray data sets. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
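With scikit-learn, the overall reduce-then-classify pipeline can be sketched as below. Note this uses plain Euclidean KNN inside LLE, whereas the paper substitutes a fuzzy, kernel-based neighbour search; the synthetic data stand in for real microarray profiles, and all parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# high-dimensional stand-in for microarray data: 200 samples, 2000 features
X, y = make_classification(n_samples=200, n_features=2000, n_informative=20,
                           random_state=0)

# reduce dimensionality with LLE, then classify with an RBF-kernel SVM
emb = LocallyLinearEmbedding(n_neighbors=12, n_components=10, random_state=0)
X_low = emb.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```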
Preparation of Ni-C Ultrafine Composite from Waste Material
Directory of Open Access Journals (Sweden)
Mahmoud A. Rabah
2017-06-01
Full Text Available This work depicts the preparation of Ni-C ultrafine composite from used engine oil. The used oil was emulsified with detergent loaded with Ni (OH2. The loaded emulsion was sprayed on electric plasma generated between two C electrodes to a DC main 28 V and 70-80 A. The purged Ni-doped carbon fume was trapped on a polymer film moistened with synthetic adhesive to fix the trapped smoke. Characterization of the deposit was made using SEM. XRD examined the crystal morphology. Carbon density in the cloud was calculated. The average size and thickness of the deposited composite is 120-160 nm. Aliphatic hydrocarbons readily decompose to gaseous products. Solid carbon smoke originates from aromatic compounds. Plasma heat blasts the oil in short time to decompose in one step.
Directory of Open Access Journals (Sweden)
Kursat Zuhtuogullari
2013-01-01
Full Text Available Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on the number of input dimensions and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is a further problem, which the developed software also eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on the number of input dimensions and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is a further problem, which the developed software also eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
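For orientation, here is a hedged genetic-algorithm feature-reduction sketch with roulette-wheel selection, as the abstract describes. One-point crossover stands in for the paper's linear order crossover, and a cross-validated nearest-neighbour accuracy stands in for the paper's fitness function; the dataset is synthetic.

```python
# Hypothetical GA feature-reduction sketch with roulette-wheel selection.
# One-point crossover replaces the paper's linear order crossover, and a
# nearest-neighbour accuracy score replaces the paper's fitness function.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=150, n_features=12, n_informative=5,
                           random_state=1)   # 12 inputs, as in the urological data

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()           # prefer smaller reducts

pop = rng.integers(0, 2, size=(30, 12))      # population of binary feature masks
for gen in range(15):
    fit = np.array([fitness(m) for m in pop])
    probs = fit - fit.min() + 1e-9
    probs /= probs.sum()                     # roulette-wheel selection
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    cut = rng.integers(1, 12, size=len(pop) // 2)
    children = parents.copy()
    for i, c in enumerate(cut):              # one-point crossover
        children[2*i, c:], children[2*i+1, c:] = parents[2*i+1, c:], parents[2*i, c:]
    children ^= (rng.random(children.shape) < 0.02)   # bit-flip mutation
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
print("reduct (selected feature indices):", np.flatnonzero(best))
```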
Energy Technology Data Exchange (ETDEWEB)
Di Youying [College of Chemistry and Chemical Engineering, Liaocheng University, Liaocheng 252059, Shandong (China)], E-mail: yydi@lcu.edu.cn; Hong Yuanping; Kong Yuxia; Yang Weiwei [College of Chemistry and Chemical Engineering, Liaocheng University, Liaocheng 252059, Shandong (China); Tan Zhicheng [Thermochemistry Laboratory, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China)
2009-01-15
A novel compound, viz. zinc nicotinate monohydrate, was synthesized by room-temperature solid-phase synthesis. The techniques of FT-IR, chemical and elemental analyses, and X-ray powder diffraction were applied to characterise the structure and composition of the complex. In accordance with Hess' law, a thermochemical cycle was designed, and the enthalpy change of the solid-phase reaction of nicotinic acid with hydrated zinc acetate was determined to be {delta}{sub r}H{sub m}{sup 0} = (46.71 {+-} 0.21) kJ.mol{sup -1} by use of an isoperibol solution-reaction calorimeter. The standard molar enthalpy of formation of the title complex Zn(Nic){sub 2} . H{sub 2}O(s) was calculated as -(1062.3 {+-} 2.0) kJ . mol{sup -1} by use of the enthalpies of dissolution and other auxiliary thermodynamic data.
Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel
2016-05-01
The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis, using a novel method of evaluating the quality of radiation-dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software package that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Thereafter, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed either as full-dose weighted filtered back projections or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose reduction.
Cifuentes Garzón, Juan Andrés
2012-01-01
A problem in interpreting financial information is the diversity of accounting standards and the difficulty of issuing criteria that are understandable, comparable, etc., to other users of the information, so that the results obtained do not lose credibility. To put an end to this and several other problems, and to increase the transparency of information, the IASB (International Accounting Standards Board) committed to revising the NIC (Normas Internacionales de Contabilidad, the International Accounting Standards). Later, according to ...
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.
Vimalarani, C; Subramanian, R; Sivanandam, S N
2016-01-01
Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example, temperature, water level, pressure, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they perform operations and communicate with neighboring nodes. To maximize the lifetime of a wireless sensor network, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Network, in which clustering and cluster head selection are done using the Particle Swarm Optimization (PSO) algorithm so as to minimize the power consumption in WSN. The performance metrics are evaluated, and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
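To make the PSO ingredient concrete, here is a simplified sketch of PSO-based cluster-head placement. The fitness used here, the total node-to-nearest-head distance as an energy proxy, is an assumption; the paper's EPSO-CEO formulation is more elaborate.

```python
# Simplified PSO sketch for cluster-head selection in a WSN. The fitness
# (total node-to-nearest-head distance as an energy proxy) is an assumption,
# not the EPSO-CEO objective from the paper.
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.random((60, 2))          # sensor positions in the unit square
K = 5                                # number of cluster heads

def energy_proxy(heads):             # heads: (K, 2) candidate head positions
    d = np.linalg.norm(nodes[:, None, :] - heads[None], axis=2)
    return d.min(axis=1).sum()       # each node talks to its nearest head

P, iters = 25, 100
pos = rng.random((P, K, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([energy_proxy(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, P, K, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    f = np.array([energy_proxy(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best energy proxy:", pbest_f.min())
```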
Directory of Open Access Journals (Sweden)
Felix Fritzen
2018-02-01
Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intensive finite element simulations.
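The core of DEIM is a greedy selection of interpolation indices from an orthonormal snapshot basis. The sketch below implements that standard greedy step (in the style of Chaturantabut and Sorensen); the snapshot matrix is synthetic.

```python
# Sketch of the standard greedy DEIM index selection: given an orthonormal
# basis U of nonlinear-term snapshots, pick interpolation rows so that the
# nonlinearity needs to be evaluated at only a few points.
import numpy as np

def deim_indices(U):
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector at the already-chosen rows ...
        c = np.linalg.solve(U[p, :j], U[p, j])
        # ... and pick the row where the interpolation residual is largest.
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

snapshots = np.random.default_rng(3).random((200, 40))      # synthetic snapshots
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)     # POD basis
idx = deim_indices(U[:, :10])
print("DEIM interpolation rows:", idx)
```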
Dong, S.
2018-05-01
We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
International Nuclear Information System (INIS)
Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang
2014-01-01
Purpose: A number of different techniques have been developed to reduce the radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known study, the dose reduction reaches 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
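A channelized Hotelling observer for the SKE detection task can be sketched in a few lines: images are passed through a small channel set, the Hotelling template is built from training statistics, and detectability is summarized by the empirical AUC. The radial Gaussian channel profiles and synthetic images below are stand-ins for the paper's DDOG channels and CT slices.

```python
# Sketch of a channelized Hotelling observer for an SKE detection task.
# Radial Gaussian channels and white-noise backgrounds are illustrative
# assumptions, not the paper's DDOG channels or CT data.
import numpy as np

rng = np.random.default_rng(4)
npix, nch = 64, 6
xx, yy = np.meshgrid(np.arange(npix) - npix / 2, np.arange(npix) - npix / 2)
r2 = xx**2 + yy**2
chans = np.stack([np.exp(-r2 / (2 * (2.0 * 1.6**k) ** 2)).ravel()
                  for k in range(nch)], axis=1)             # radial Gaussian channels

signal = 0.4 * np.exp(-r2 / (2 * 3.0**2)).ravel()           # known low-contrast signal
bkg = rng.normal(0, 1, size=(400, npix * npix))             # noisy backgrounds
v0 = bkg[:200] @ chans                                      # signal-absent channel outputs
v1 = (bkg[200:] + signal) @ chans                           # signal-present channel outputs

S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))                     # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))             # Hotelling template
t0, t1 = v0 @ w, v1 @ w
auc = (t1[:, None] > t0[None, :]).mean()                    # Mann-Whitney AUC estimate
print("CHO AUC:", auc)
```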
Efficient algorithms of multidimensional γ-ray spectra compression
International Nuclear Information System (INIS)
Morhac, M.; Matousek, V.
2006-01-01
Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms respectively, are proposed. In both algorithms we exploit the reduction of data volume due to the symmetry of the γ-ray spectra
100 years on, the Titanic disaster finds new culprits
Marco Soler, Enric
2012-01-01
100 years after the sinking of the Titanic, the article reviews, from an astronomical point of view, the influence of the lunisolar alignment of January 1912 on the maritime disaster.
Energy Technology Data Exchange (ETDEWEB)
Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)
2014-04-15
To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The noise reduction effectiveness of AIDR 3D relative to FBP was also compared across the phantom sizes. AIDR 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05), with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase in SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
International Nuclear Information System (INIS)
Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo
2014-01-01
To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The noise reduction effectiveness of AIDR 3D relative to FBP was also compared across the phantom sizes. AIDR 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05), with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase in SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
Mindfulness-Based Stress Reduction
Robust MST-Based Clustering Algorithm.
Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing
2018-06-01
Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. To solve these problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points is achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
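For context, the classic MST clustering baseline that this letter improves on (build the MST of the distance graph and cut the k-1 longest edges) can be sketched as follows; the two-blob dataset is synthetic.

```python
# Baseline MST clustering sketch: build the MST of the pairwise-distance graph
# and cut the k-1 longest edges. This is the classic edge-removal scheme the
# letter argues is fragile to noise; shown only to make the starting point concrete.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

D = squareform(pdist(X))                      # dense pairwise distances
mst = minimum_spanning_tree(D).tocoo()        # sparse MST of the distance graph
order = np.argsort(mst.data)[::-1]            # MST edges, longest first
k = 2
keep = np.ones(len(mst.data), dtype=bool)
keep[order[:k - 1]] = False                   # cut the k-1 longest edges

pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=D.shape)
n_comp, labels = connected_components(pruned, directed=False)
print("clusters found:", n_comp)
```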
Network reconfiguration for loss reduction in electrical distribution system using genetic algorithm
International Nuclear Information System (INIS)
Adail, A.S.A.A.
2012-01-01
The distribution system is a critical link between the utility and the nuclear installation, and power losses occur while feeding electricity to the installation. The quality of the network depends on the reduction of these losses; a distribution system that feeds a nuclear installation must deliver power of high quality. For example, at the Inshas site, electrical power is supplied from two incoming feeders (one from the new Abu-Zabal substation and the other from the old Abu-Zabal substation). Each feeder is designed to carry the full load, and the operator prefers to connect to the new Abu-Zabal substation, which has good power quality. Bad power quality directly affects the nuclear reactor and has a negative impact on the installed sensitive equipment. This thesis studies the electrical losses in a distribution system (their causes and the factors affecting them) and feeder reconfiguration methods, and applies a genetic algorithm to an electric power distribution system. Finally, the study proposes an optimization technique based on genetic algorithms for distribution network reconfiguration to reduce the network losses to a minimum. The proposed method is applied to an IEEE test network containing 3 feeders and 16 nodes. The technique is applied to two groups: distribution with general loads and with nuclear loads. In each group the technique is applied to seven cases covering the normal operating state, system fault conditions, and different load conditions. Simulation results are presented to show the accuracy of the technique.
International Nuclear Information System (INIS)
Vaegler, Sven
2016-01-01
The treatment of cancer in radiation therapy is achievable today with techniques that enable highly conformal dose distributions and steep dose gradients. To avoid mistreatment, these irradiation techniques have necessitated enhanced patient-localization techniques. With an x-ray tube integrated into modern linear accelerators, kV projections can be acquired over a sufficiently large angular range and reconstructed into a volumetric image data set of the patient's current anatomy prior to irradiation. This so-called cone-beam CT (CBCT) allows precise verification of patient positioning as well as adaptive radiotherapy. The benefit of improved patient positioning from daily CBCT scans is offset by an increased, non-negligible radiation exposure of the patient. To decrease the radiation exposure, substantial research effort is focused on various dose-reduction strategies. Prominent strategies are decreasing the charge per projection, reducing the number of projections, and reducing the acquisition space. Unfortunately, these acquisition schemes lead to images of degraded quality with the widely used Feldkamp-Davis-Kress (FDK) image reconstruction algorithm. More sophisticated image reconstruction techniques can cope with these dose-reduction strategies without degrading the image quality. A frequently investigated method is image reconstruction by minimizing the total variation (TV), also known as Compressed Sensing (CS). A Compressed-Sensing-based reconstruction framework that includes prior images in the reconstruction algorithm is the Prior-Image-Constrained Compressed Sensing (PICCS) algorithm. The images reconstructed by PICCS outperform the reconstruction results of the conventional FDK-based method if only a small number of projections are available. However, a drawback of PICCS is that major deviations between prior image data sets and the
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2014-09-22
We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.
Reduction of image-based ADI-to-AEI overlay inconsistency with improved algorithm
Chen, Yen-Liang; Lin, Shu-Hong; Chen, Kai-Hsiung; Ke, Chih-Ming; Gau, Tsai-Sheng
2013-04-01
In image-based overlay (IBO) measurement, the quality of various measurement spectra can be judged by quality indicators and by the ADI-to-AEI similarity, to determine the optimum light spectrum. However, we found some IBO measurement results showing an erroneous indication of wafer expansion in the difference between the ADI and AEI maps, even after their measurement spectra were optimized. To reduce this inconsistency, an improved image-calculation algorithm is proposed in this paper. Different gray levels composing the inner- and outer-box contours are extracted to calculate their ADI overlay errors. The symmetry of the intensity distribution at the thresholds dictated by a range of gray levels is used to determine the particular gray level that minimizes the ADI-to-AEI overlay inconsistency. After this improvement, the ADI is more similar to the AEI, with a smaller expansion difference. The same wafer was also checked with a diffraction-based overlay (DBO) tool to verify that there is no physical wafer expansion. When there is actual wafer expansion induced by large internal stress, both the IBO and DBO measurements indicate similar expansion results. A scanning white-light interference microscope was used to check the variation of wafer warpage between the ADI and AEI stages; it shows a trend similar to the overlay difference map, confirming the internal stress.
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network
Directory of Open Access Journals (Sweden)
C. Vimalarani
2016-01-01
Full Text Available Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example, temperature, water level, pressure, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they perform operations and communicate with neighboring nodes. To maximize the lifetime of a wireless sensor network, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Network, in which clustering and cluster head selection are done using the Particle Swarm Optimization (PSO) algorithm so as to minimize the power consumption in WSN. The performance metrics are evaluated, and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
Suroso, Arif Imam; Siahaan, Rotua
2006-01-01
This study aimed to determine whether work stress affects employees' performance, and to identify the indicators of each stress-shaping element that influences the performance of employees in the plant department of the agribusiness industry at PT. NIC. The method of this study is a case study involving 155 respondents. Using Structural Equation Modelling (SEM), it is found that the influence of work stress on employees' performance is significantly negative. It means that the advan...
Energy Technology Data Exchange (ETDEWEB)
Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering
2017-10-04
Here, this research applies an R-Markov Average Reward Technique-based reinforcement learning (RL) algorithm, RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variant RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show a significant reduction in emissions (CO, CO_{2}, NO_{x}, VOC, PM_{10}) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
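RMART belongs to the average-reward RL family built on R-learning. The sketch below shows the tabular R-learning update on a random toy MDP; the MDP, step sizes, and exploration schedule are assumptions for illustration only, not the paper's traffic environment.

```python
# Tabular R-learning sketch (the average-reward update family RMART builds on),
# run on a random toy MDP. All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
nS, nA = 8, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition probabilities P[s, a, s']
R = rng.random((nS, nA))                        # expected rewards

Q = np.zeros((nS, nA))
rho = 0.0                                       # running average-reward estimate
alpha, beta, eps = 0.1, 0.01, 0.1
s = 0
for _ in range(50_000):
    a = rng.integers(nA) if rng.random() < eps else int(Q[s].argmax())
    s2 = rng.choice(nS, p=P[s, a])
    r = R[s, a]
    # R-learning update: value is measured relative to the average reward rho.
    Q[s, a] += alpha * (r - rho + Q[s2].max() - Q[s, a])
    if Q[s, a] == Q[s].max():                   # update rho only on greedy steps
        rho += beta * (r - rho + Q[s2].max() - Q[s].max())
    s = s2

print("estimated average reward:", rho)
```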
Extraction of copper from chloride solutions with LIX 860N-IC and LIX 84-IC
Directory of Open Access Journals (Sweden)
Navarro, Carlos María
2001-08-01
Full Text Available In this work, the extraction of copper from chloride solutions with two hydroxyoximes, 5-nonylsalicylaldoxime (LIX 860N-IC) and 2-hydroxy-5-nonylacetophenone oxime (LIX 84-IC), is discussed. The results showed that an increase in the acidity and an increase in the total concentration of chloride ions in the aqueous phase significantly decreased the extraction of copper, as well as the extraction of iron, for both extractants. This effect of the chloride ions can be explained by the formation of a series of chloro complexes of Cu(II) and Fe(III) in the aqueous phase. The effect of initial pH and total chloride concentration on the extraction of chloride by the organic phase suggests that LIX 860N-IC, and to a lesser extent LIX 84-IC, extracts small amounts of the cationic complex CuCl^{+}. An increase in the concentration of chloride ions also produced a small decrease in the rate of copper extraction with both hydroxyoximes.
This work discusses the study of copper extraction from chloride solutions with two hydroxyoximes: 5-nonylsalicylaldoxime (LIX 860N-IC) and 2-hydroxy-5-nonylacetophenone oxime (LIX 84-IC). The results indicated that increasing the acidity or increasing the chloride concentration in the aqueous phase produces a significant decrease in the extraction of copper and iron with both hydroxyoximes. This effect of the chloride ion is explained by the formation of several chloro complexes of Cu(II) and Fe(III) in the aqueous solution. The effect of pH and total chloride concentration on chloride extraction suggests that LIX 860N-IC, and to a lesser degree LIX 84-IC, extracts small amounts of the monovalent cation CuCl^{+}. It was also determined that an increase in the chloride concentration of the aqueous solution produces a slight decrease in the rate of copper extraction with both hydroxyoximes.
Directory of Open Access Journals (Sweden)
Huda M. Hassan Hilal
2015-06-01
Full Text Available The topic of female leadership has yet to be conclusively and impartially investigated, especially from the Islamic perspective. The current study bridges the gap between the original Qur’ānic teachings and dominant Muslim culture by highlighting the Qur’ānic conceptualisation of female leadership and investigates the myth that only men are the best leaders. It identifies female leadership qualities of Queen Āsiyah, Queen Balqīs and Maryam, the daughter of ‘Imrān and mother of Prophet ‘Īsā, and matches them with male leadership qualities of Prophet Muhammad, Dhū al-Qarnayn, Țālūt, and Prophet Sulaymān as narrated in the Qur’ān. The research documents the traits of a leader’s personality, leader-follower relation, task structure, and crisis management as four principal axes to the study, relying on the dominant theories of leadership. The inference reveals conformity between both male and female patterns of leadership, except for minor differences related to physical strength, and war conducts.
HPT Deformation of Copper and Nickel Single Crystals
International Nuclear Information System (INIS)
Hafok, M.; Vorhauer, A.; Pippan, R.; Keckes, J.
2005-01-01
Full text: Copper and nickel single crystals of high purity, with crystallographic orientations (001) and (111) respectively, were deformed by high pressure torsion (HPT) at room temperature. Special interest was devoted to the structural evolution of the material, which was characterized by electron backscatter diffraction (EBSD) and X-ray texture analysis. In addition, backscattered electron investigations were used to characterize the shape and size of the newly formed structure. Furthermore, the study focuses on the microstructural and microtextural evolution that leads to the increase of misorientation angle with increasing plastic deformation. We observed an increasing fragmentation of the structure with increasing equivalent plastic strain, up to a level where the grain size saturates. The saturation could be traced back to dynamic recovery and recrystallisation during the deformation process, which depends on the purity of the material. (author)
Seizure detection algorithms based on EMG signals
DEFF Research Database (Denmark)
Conradsen, Isa
Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizuredetection system. However, different patients might require different types of algorithms /approaches....
Directory of Open Access Journals (Sweden)
Weitao Li
2017-01-01
Full Text Available During neurosurgery, an optical probe has been used to guide the micro-electrode, which is punctured into the globus pallidus (GP) to create a lesion that can relieve the cardinal symptoms. Accurate target localization is the key factor affecting the treatment. However, given the scattering nature of the tissue, the "look ahead distance" (LAD) of the optical probe makes the boundary between different tissues blurred and difficult to distinguish; this effect is defined as an artifact. Thus, it is highly desirable to reduce the artifact caused by the LAD. In this paper, a real-time algorithm based on a precise threshold is proposed to eliminate the artifact. The value of the threshold is determined automatically from the maximum error of the measurement system during the calibration procedure. The measured data are then processed sequentially, based only on the threshold and the preceding data. Moreover, a 100 μm double-fiber probe and two-layer and multi-layer phantom models were used to validate the precision of the algorithm. The error of the algorithm is one puncture step, as shown in theory and experiment. It is concluded that the present method can reduce the artifact caused by the LAD and make the real boundary sharper and less blurred in real time. It might potentially be used for neurosurgery navigation.
Liang, Lihua; Yuan, Jia; Zhang, Songtao; Zhao, Peng
2018-01-01
This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two-degrees-of-freedom (2-DoF) motion control problem in head seas for wave-piercing catamarans (WPC). The proposed GA-based LQR control strategy selects optimal weighting matrices (Q and R). The seakeeping performance of the WPC challenges the proposed algorithm because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as sea disturbances and the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion-reduction effects on the WPC. Finally, the real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. In conclusion, simulation and experimental results prove the correctness of the proposed algorithm: the heave and pitch reductions are more than 18% at different high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller.
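A hedged sketch of the GA-over-LQR-weights idea follows. The double-integrator plant, the rollout cost used for ranking, and the simple mutation-only evolution loop are placeholders; the sketch only illustrates searching over (Q, R) and solving a Riccati equation for each candidate.

```python
# Hedged sketch of GA-style tuning of LQR weights on a toy plant. The plant,
# cost, and evolution scheme are assumptions, not the paper's WPC model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])      # toy 1-DoF vertical-motion plant
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])

def closed_loop_cost(qdiag, r):
    Q, R = np.diag(qdiag), np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)           # optimal gain for these weights
    Acl, x, dt, J = A - B @ K, x0.copy(), 0.01, 0.0
    for _ in range(2000):                     # crude forward-Euler rollout
        u = -K @ x
        J += (x @ Q @ x + u @ R @ u) * dt
        x = x + dt * (Acl @ x)
    return J + 5.0 * abs(x[0])                # penalize residual displacement

rng = np.random.default_rng(7)
pop = rng.uniform(0.1, 50.0, size=(20, 3))    # genes: [q1, q2, r]
for gen in range(30):
    costs = np.array([closed_loop_cost(g[:2], g[2]) for g in pop])
    elite = pop[np.argsort(costs)[:5]]        # truncation selection
    pop = np.clip(elite.repeat(4, axis=0) *
                  rng.normal(1.0, 0.15, size=(20, 3)), 0.1, 50.0)

best = pop[np.argmin([closed_loop_cost(g[:2], g[2]) for g in pop])]
print("tuned [q1, q2, r]:", best)
```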
Dawn-dusk asymmetries and sub-Alfvénic flow in the high and low latitude magnetosheath
Directory of Open Access Journals (Sweden)
M. Longmore
2005-11-01
Full Text Available We present the results of a statistical survey of the magnetosheath using four years of Cluster orbital coverage. Moments of the plasma distribution obtained from the electron and ion instruments, together with magnetic field data, are used to characterise the flow and density in the magnetosheath. We note two important differences between our survey and the gasdynamic model predictions: a deceleration of the flow at higher latitudes close to the magnetopause, resulting in sub-Alfvénic flow near the cusp, and a dawn-dusk asymmetry with higher velocity magnitudes and lower densities measured on the dusk side of the magnetosheath in the Northern Hemisphere. The latter observation is in agreement with studies carried out by Paularena et al. (2001), Němeček et al. (2000), and Šafránková et al. (2004). In addition to this, we observe a reversal of this asymmetry in the Southern Hemisphere. High-latitude sub-Alfvénic flow is thought to be a necessary condition for steady-state reconnection poleward of the cusp.
Celma Vicente, Matilde; Ajuria-Imaz, Eloisa; Lopez-Morales, Manuel; Fernandez-Marín, Pilar; Menor-Castro, Alicia; Cano-Caballero Galvez, Maria Dolores
2015-01-01
This paper shows the utility of the NIC standardized language for assessing the extent of nursing students' skills in the Practicum in surgical units. The aims were to identify the nursing interventions classification (NIC) interventions that students can learn to perform in surgical units, and to determine the level of difficulty in learning the interventions depending on the week of rotation in the clinical placement. Qualitative study using the Delphi consensus technique, involving nurses with teaching experience who work in hospital surgical units where students undertake the Practicum. The results were triangulated through a questionnaire to tutors about the degree of agreement. A consensus was reached about the interventions that students can achieve in surgical units and the frequency with which they can be performed. The level of difficulty of each intervention, and the number of weeks of practice that students need to reach the expected level of competence, was also determined. The results should enable us to design rotations better matched to student needs. Knowing the frequency of each intervention performed in each unit determines the chances of learning it, as well as the indicators for its assessment. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm
Wang, Zhen; Kudo, Hiroyuki
2012-03-01
This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts in the reconstructed image may render it non-diagnostic: because of inaccurate beam-hardening correction from highly attenuating objects, a satisfactory image cannot be reconstructed from projections with missing or distorted data. The traditional analytical metal artifact reduction (MAR) method first subtracts the metallic-object part of the projection data from the originally obtained projections, then completes the subtracted part of the projections by interpolation, and finally reconstructs the interpolated projections with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step can embody unrealistic assumptions about the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP)-type MAR method that replaces the FBP algorithm with the DBP algorithm in the third step. In the FBP algorithm, the interpolated projection is filtered at each projection view angle before back-projection, so the interpolation error propagates through the whole projection. The DBP algorithm, however, allows filtering after back-projection along a Hilbert-filter direction; as a result, the effect of the interpolation error is reduced and an improvement in the quality of the reconstructed images can be expected. In other words, choosing the DBP algorithm instead of the FBP algorithm means that projection data less contaminated by interpolation error are used in the reconstruction. A simulation study was performed to evaluate the proposed method using a given phantom.
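The in-painting step of the traditional MAR chain (step two above) is easy to sketch: mask the metal trace in the sinogram and fill it by per-view linear interpolation. The synthetic sinogram and metal trace below are assumptions; the DBP/FBP reconstruction stage that follows is beyond this snippet.

```python
# Sketch of the interpolation step of traditional MAR on a synthetic sinogram:
# corrupted detector bins (the "metal trace") are filled per view by linear
# interpolation across the uncorrupted bins.
import numpy as np

rng = np.random.default_rng(8)
n_views, n_bins = 180, 128
sino = np.sin(np.linspace(0, np.pi, n_bins))[None, :].repeat(n_views, 0)
sino += 0.01 * rng.standard_normal(sino.shape)             # synthetic projections

# Fake metal trace: a band of corrupted detector bins drifting across views.
metal = np.zeros_like(sino, dtype=bool)
for v in range(n_views):
    c = 40 + v // 6
    metal[v, c:c + 8] = True
sino_corrupt = np.where(metal, 10.0, sino)                 # saturated metal values

sino_mar = sino_corrupt.copy()
bins = np.arange(n_bins)
for v in range(n_views):                                   # interpolate per view
    bad = metal[v]
    sino_mar[v, bad] = np.interp(bins[bad], bins[~bad], sino_corrupt[v, ~bad])

print("max residual vs clean sinogram:", np.abs(sino_mar - sino).max())
```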
Verification-Based Interval-Passing Algorithm for Compressed Sensing
Wu, Xiaofu; Yang, Zhen
2013-01-01
We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals, using parity check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...
Gradient Evolution-based Support Vector Machine Algorithm for Classification
Zulvia, Ferani E.; Kuo, R. J.
2018-03-01
This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm that finds the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to determine the SVM parameters: the GE algorithm acts as a global optimizer that finds the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified on some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
A Trust-region-based Sequential Quadratic Programming Algorithm
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
IMPLEMENTATION OF INCIDENT DETECTION ALGORITHM BASED ON FUZZY LOGIC IN PTV VISSIM
Directory of Open Access Journals (Sweden)
Andrey Borisovich Nikolaev
2017-05-01
Full Text Available Traffic incident management is a major challenge in traffic management, requiring constant attention and significant investment, as well as fast and accurate decisions in order to re-establish normal traffic conditions. Automatic control methods are becoming an important factor in reducing the traffic congestion caused by an incident. In this paper, an algorithm for automatic incident detection based on fuzzy logic is implemented in the PTV VISSIM software. Nine different types of tests were conducted on a two-lane road segment with changing traffic conditions: the location of the road accident and the traffic loading. The main conclusion of the research is that the proposed incident detection algorithm demonstrates good performance in terms of detection time and false alarms.
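A minimal fuzzy incident rule can be sketched with two inputs (relative speed drop and detector occupancy), ramp memberships, and min as the AND operator. The membership breakpoints below are assumptions, not the paper's calibrated values.

```python
# Minimal fuzzy-logic incident flag: rule "speed drop HIGH AND occupancy HIGH
# -> incident", with ramp memberships and min as the AND operator. The
# breakpoints are illustrative assumptions.
def ramp_up(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def incident_degree(speed_drop, occupancy):
    speed_drop_high = ramp_up(speed_drop, 0.2, 0.6)   # fraction of free-flow speed lost
    occupancy_high = ramp_up(occupancy, 0.15, 0.35)   # detector occupancy fraction
    return min(speed_drop_high, occupancy_high)       # fuzzy AND of the two conditions

for sd, occ in [(0.1, 0.10), (0.5, 0.30), (0.7, 0.40)]:
    deg = incident_degree(sd, occ)
    print(f"drop={sd:.1f} occ={occ:.2f} -> incident degree {deg:.2f}",
          "ALARM" if deg > 0.5 else "")
```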
International Nuclear Information System (INIS)
Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo
2004-01-01
This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, it could prove useful for a wide range of other data reduction problems. The design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to those achieved by the algorithm implemented in the first prototype of the CARLOS chip, the chip that will be used in the silicon drift detector readout chain.
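The generic shape of lossy wavelet compression (decompose, discard small coefficients, reconstruct, trade ratio against error) can be sketched with PyWavelets; a standard Daubechies DWT stands in for the detector-specific transform, and the test signal is synthetic.

```python
# Generic lossy wavelet compression sketch with PyWavelets: decompose, zero
# the small coefficients, reconstruct, and report compression ratio vs. RMS
# error. A standard db4 DWT replaces the detector-specific transform.
import numpy as np
import pywt

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 1024)
signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.02 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)
flat = np.concatenate(coeffs)
thresh = np.quantile(np.abs(flat), 0.90)                   # keep the largest 10%
kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
recon = pywt.waverec(kept, "db4")[: signal.size]

ratio = flat.size / sum(int(np.count_nonzero(c)) for c in kept)
rms = np.sqrt(np.mean((recon - signal) ** 2))
print(f"compression ratio ~{ratio:.1f}:1, RMS reconstruction error {rms:.4f}")
```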
LLNL Contribution to LLE FY09 Annual Report: NIC and HED Results
International Nuclear Information System (INIS)
Heeter, R.F.; Landen, O.L.; Hsing, W.W.; Fournier, K.B.
2009-01-01
In FY09, LLNL led 238 target shots on the OMEGA Laser System. Approximately half of these LLNL-led shots supported the National Ignition Campaign (NIC); the remainder was dedicated to high-energy-density stewardship experiments (HEDSE). Objectives of the LLNL-led NIC campaigns at OMEGA included: (1) Laser-plasma interaction studies in physical conditions relevant for the NIF ignition targets; (2) Demonstration of Tr = 100 eV foot symmetry tuning using a reemission sphere; (3) X-ray scattering in support of conductivity measurements of solid density Be plasmas; (4) Experiments to study the physical properties (thermal conductivity) of shocked fusion fuels; (5) High-resolution measurements of velocity nonuniformities created by microscopic perturbations in NIF ablator materials; (6) Development of a novel Compton Radiography diagnostic platform for ICF experiments; and (7) Precision validation of the equation of state for quartz. The LLNL HEDSE campaigns included the following experiments: (1) Quasi-isentropic (ICE) drive used to study material properties such as strength, equation of state, phase, and phase-transition kinetics under high pressure; (2) Development of a high-energy backlighter for radiography in support of material strength experiments using OMEGA EP and the joint OMEGA-OMEGA-EP configuration; (3) Debris characterization from long-duration, point-apertured, point-projection x-ray backlighters for NIF radiation transport experiments; (4) Demonstration of ultrafast temperature and density measurements with x-ray Thomson scattering from short-pulse laser-heated matter; (5) Development of an experimental platform to study non-local thermodynamic equilibrium (NLTE) physics using direct-drive implosions; (6) Opacity studies of high-temperature plasmas under LTE conditions; and (7) Characterization of copper (Cu) foams for HEDSE experiments.
REDVET in the U.S. Agriculture Network Information Center (AgNIC)
REDVET
2010-01-01
Abstract: AgNIC (http://www.agnic.org/) is the Agriculture Network Information Center of the USA, an alliance of institutions working to offer fast and reliable access to quality agricultural information sources, based on the concept of "centers of excellence".
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
Design of SVC Controller Based on Improved Biogeography-Based Optimization Algorithm
Directory of Open Access Journals (Sweden)
Feifei Dong
2014-01-01
Full Text Available Considering that common subsynchronous resonance controllers cannot adapt to the time-varying and nonlinear behavior of a power system, the cosine migration model, the improved migration operator, and the mutative scale of chaos and Cauchy mutation strategy are introduced into an improved biogeography-based optimization (IBBO) algorithm in order to design an optimal subsynchronous damping controller based on the mechanism of suppressing SSR by a static var compensator (SVC). The effectiveness of the improved controller is verified by eigenvalue analysis and electromagnetic simulations. The simulation results for the Jinjie plant indicate that the subsynchronous damping controller optimized by the IBBO algorithm can remarkably improve the damping of torsional modes and thus effectively suppress SSR, ensuring the safety and stability of the units and of power grid operation. Moreover, the IBBO algorithm has the merits of faster search speed and higher search accuracy in seeking the optimal control parameters, compared with traditional algorithms such as the BBO, PSO, and GA algorithms.
Simple sorting algorithm test based on CUDA
Meng, Hongyu; Guo, Fangjin
2015-01-01
With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms, such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.
SIFT based algorithm for point feature tracking
Directory of Open Access Journals (Sweden)
Adrian BURLACU
2007-12-01
Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence with an algorithm based on minimizing the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
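The matching step of such a tracker can be sketched with OpenCV: detect SIFT keypoints in two consecutive frames and keep descriptor matches that clearly minimize the descriptor distance (Lowe's ratio test). The frame file names are placeholders, and this mirrors only the matching step, not the paper's exact pipeline.

```python
# OpenCV sketch of descriptor-distance matching of SIFT features between two
# frames, with Lowe's ratio test. Frame paths are placeholder assumptions.
import cv2

def match_sift(frame_a_path="frame_000.png", frame_b_path="frame_001.png"):
    img1 = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # For each descriptor take the two nearest neighbours; keep the match only
    # if the best distance is clearly smaller than the second best.
    tracks = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < 0.75 * n.distance:
            tracks.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return tracks

if __name__ == "__main__":
    print(f"{len(match_sift())} features tracked between frames")
```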
Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm
Directory of Open Access Journals (Sweden)
Jianyong Liu
2015-01-01
Full Text Available A method in which the real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm fall easily into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the encoding and decoding processes. Therefore, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to solutions that satisfy the constraint conditions.
Measuring the Alfvénic nature of the interstellar medium: Velocity anisotropy revisited
International Nuclear Information System (INIS)
Burkhart, Blakesley; Lazarian, A.; Leão, I. C.; De Medeiros, J. R.; Esquivel, A.
2014-01-01
The dynamics of the interstellar medium (ISM) are strongly affected by turbulence, which shows increased anisotropy in the presence of a magnetic field. We expand upon the Esquivel and Lazarian method to estimate the Alfvén Mach number using the structure function anisotropy in velocity centroid data from Position-Position-Velocity maps. We utilize three-dimensional magnetohydrodynamic simulations of fully developed turbulence, with a large range of sonic and Alfvénic Mach numbers, to produce synthetic observations of velocity centroids with observational characteristics such as thermal broadening, cloud boundaries, noise, and radiative transfer effects of carbon monoxide. In addition, we investigate how the resulting anisotropy-Alfvén Mach number dependency found in Esquivel and Lazarian might change when taking the second moment of the Position-Position-Velocity cube or when using different expressions to calculate the velocity centroids. We find that the degree of anisotropy is related primarily to the magnetic field strength (i.e., Alfvén Mach number) and the line-of-sight orientation, with a secondary effect on sonic Mach number. If the line of sight is parallel to up to ≈45 deg off of the mean field direction, the velocity centroid anisotropy is not prominent enough to distinguish different Alfvénic regimes. The observed anisotropy is not strongly affected by including radiative transfer, although future studies should include additional tests for opacity effects. These results open up the possibility of studying the magnetic nature of the ISM using statistical methods in addition to existing observational techniques.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
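The described modification translates directly into code: eigendecompose the Hessian, flip the negative eigenvalues, rebuild the matrix, and take the resulting Newton step. The sketch below demonstrates this on the Rosenbrock function (whose Hessian is indefinite away from the valley); the backtracking line search is an added assumption for robustness.

```python
# Direct sketch of the eigenvalue-decomposition-based modified Newton step,
# demonstrated on the Rosenbrock function. The simple backtracking line
# search is an illustrative addition.
import numpy as np

def f(q):
    return (1 - q[0])**2 + 100 * (q[1] - q[0]**2)**2

def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2), 200 * (y - x**2)])

def hess(p):
    x, y = p
    return np.array([[2 + 1200 * x**2 - 400 * y, -400 * x],
                     [-400 * x, 200.0]])

p = np.array([-1.2, 1.0])
for it in range(100):
    w, V = np.linalg.eigh(hess(p))                 # eigenvalue decomposition
    H_mod = V @ np.diag(np.abs(w)) @ V.T           # flip negative eigenvalues
    d = np.linalg.solve(H_mod, -grad(p))           # always a descent direction
    t = 1.0
    while f(p + t * d) > f(p) and t > 1e-8:        # backtracking line search
        t *= 0.5
    p = p + t * d
    if np.linalg.norm(grad(p)) < 1e-8:
        break

print("minimizer ~", p, "after", it + 1, "iterations")
```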
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing
Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K
2017-12-19
Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information impact OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging. The results suggest that the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, with the experimental measurements agreeing well with the conclusions drawn from the MC simulations.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Energy Technology Data Exchange (ETDEWEB)
Ngonkham, S. [Khonkaen Univ., Amphur Muang (Thailand). Dept. of Electrical Engineering; Buasri, P. [Khonkaen Univ., Amphur Muang (Thailand). Embed System Research Group
2009-03-11
A harmony search (HS) algorithm was used to optimize economic dispatch (ED) in a wind energy conversion system (WECS) for power system integration. The HS algorithm is based on a stochastic random search method. System costs for the WECS were estimated in relation to average wind speeds. The HS algorithm was implemented to optimize the ED with a simple programming procedure. The study showed that the initial parameters must be carefully selected to ensure the accuracy of the HS algorithm. The algorithm demonstrated that total costs of the WECS were higher than costs associated with energy efficiency procedures that reduced the same amount of greenhouse gas (GHG) emissions. 7 refs., 10 tabs., 16 figs.
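For readers unfamiliar with harmony search, the following sketch shows the generic HS loop (harmony memory, memory consideration rate hmcr, pitch adjustment rate par, bandwidth bw). The study's dispatch cost model is not reproduced; cost is any user-supplied objective, and all parameter defaults are illustrative:

```python
import numpy as np

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    memory = rng.uniform(lo, hi, size=(hms, dim))        # harmony memory
    costs = np.apply_along_axis(cost, 1, memory)
    for _ in range(iters):
        # improvise: per dimension, draw from memory (w.p. hmcr) or at random
        new = np.where(rng.random(dim) < hmcr,
                       memory[rng.integers(hms, size=dim), np.arange(dim)],
                       rng.uniform(lo, hi))
        adjust = rng.random(dim) < par                   # pitch adjustment
        new[adjust] += bw * (hi - lo)[adjust] * rng.uniform(-1, 1, adjust.sum())
        new = np.clip(new, lo, hi)
        worst = np.argmax(costs)
        c = cost(new)
        if c < costs[worst]:                             # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = np.argmin(costs)
    return memory[best], costs[best]
```

The sensitivity of the result to hms, hmcr, par and bw is exactly the initial-parameter selection issue the study highlights.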
Cai, Jia; Tang, Yi
2018-02-01
Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, namely kernel CCA, has been proposed to describe the nonlinear relationship between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
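The randomized Kaczmarz method at the core of the proposal can be sketched in a few lines for a plain linear system; the paper applies the same row-projection update inside kernel CCA, which is not reproduced here:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10000, rng=None):
    """Solve a consistent system Ax = b by projecting the iterate onto the
    hyperplane of one row per step, with rows sampled proportionally to
    their squared norms (the standard Strohmer-Vershynin scheme)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # project onto row i
    return x
```

The expected convergence rate of this update is governed by the scaled condition number of A, which is why that quantity appears in the paper's convergence analysis.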
International Nuclear Information System (INIS)
Vaitheeswaran, Ranganathan; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram
2011-01-01
The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both can be combined, leading to an efficient global optimizer that solves the problem at a very fast rate. Our hybrid approach combines the Gaussian elimination algorithm (an exact optimizer) with the fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with the Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of the FSA algorithm; (ii) the convergence (percentage reduction in the cost function) of the hybrid algorithm is about 20% better than that of the GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) (∼ 2% - 5% improvement) and Homogeneity Index (HI) (∼ 4% - 10% improvement) compared to the GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Structure-Based Algorithms for Microvessel Classification
Smith, Amy F.
2015-02-01
© 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.
International Nuclear Information System (INIS)
Munir, I.; Hussan, W.; Kazi, M.; Mian, A.
2016-01-01
Brassica juncea is an important oilseed crop throughout the world. The demand for and cultivation of oilseed crops have gained importance due to the rapid increase in world population and industrialization. Fungal diseases pose a great threat to Brassica productivity worldwide. The absence of resistance genes against fungal infection within crossable germplasms of this crop necessitates genetic engineering approaches to produce transgenic plants resistant to fungal infections. In the current study, hypocotyls and cotyledons of Brassica juncea, used as explants, were transformed with Agrobacterium tumefaciens strain EHA101 harboring the binary vector pEKB/NIC containing a synthetic chitinase gene (NIC), an antifungal gene under the control of the cauliflower mosaic virus promoter (CaMV35S). The bar and nptII genes were used as selectable markers. Presence of the chitinase gene in transgenic lines was confirmed by PCR and Southern blotting analysis. The effect of proteins extracted from non-transgenic and transgenic lines was observed on the growth of Alternaria brassicicola, a common disease-causing pathogen of brassica crops. In comparison to non-transgenic control lines, the leaf tissue extracts of the transgenic lines showed considerable resistance and antifungal activity against A. brassicicola. The antifungal activity in transgenic lines corresponded to the transgene copy number. (author)
Novel prediction- and subblock-based algorithm for fractal image compression
International Nuclear Information System (INIS)
Chung, K.-L.; Hsu, C.-H.
2006-01-01
Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks; this leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, performance comparisons between our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, are also investigated.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 m
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
International Nuclear Information System (INIS)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng; Fung, Russell; Zhu Chun; Miao Jianwei; Mao Yu; Khatonabadi, Maryam; DeMarco, John J.; McNitt-Gray, Michael F.; Osher, Stanley J.
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest
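The alternating enforcement loop at the heart of EST can be illustrated with a toy sketch that substitutes an ordinary FFT for the pseudopolar FFT and uses nonnegativity as the real-space constraint; the regularization and the termination criterion of the actual method are omitted:

```python
import numpy as np

def est_style_reconstruction(measured_fft, mask, iters=200):
    """Toy analogue of the EST iteration: enforce the measured samples in
    Fourier space and a physical (nonnegativity) constraint in real space.
    mask marks the Fourier samples actually measured (True = known)."""
    image = np.zeros(mask.shape)
    for _ in range(iters):
        spectrum = np.fft.fft2(image)
        spectrum[mask] = measured_fft[mask]   # enforce measured data
        image = np.real(np.fft.ifft2(spectrum))
        image[image < 0] = 0                  # real-space constraint
    return image
```

The algebraically exact pseudopolar transform matters because it lets the measured fan-beam projections be enforced without the interpolation errors an ordinary Cartesian FFT would introduce; the sketch only conveys the back-and-forth structure of the iteration.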
MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming
Directory of Open Access Journals (Sweden)
Yuteng Xiao
2017-01-01
Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it degrades dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm with estimated diagonal loading. An MVDR optimization model based on diagonal loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value within the interval is then determined experimentally. The experimental results show that, compared with existing algorithms, the improved algorithm is practical and effective.
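A minimal NumPy sketch of MVDR with diagonal loading; the fallback loading level below (a fraction of the average eigenvalue, trace(R)/N) is a common heuristic standing in for the interval derivation and experimental tuning described in the paper:

```python
import numpy as np

def mvdr_weights(snapshots, steering, loading=None):
    """snapshots: (n_sensors, n_snapshots) complex array; steering: (n_sensors,).
    Returns w = (R + lam*I)^-1 a / (a^H (R + lam*I)^-1 a)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    n = R.shape[0]
    lam = loading if loading is not None else 0.1 * np.trace(R).real / n
    R_loaded = R + lam * np.eye(n)            # diagonal loading compensation
    w = np.linalg.solve(R_loaded, steering)
    return w / (steering.conj() @ w)          # distortionless: w^H a = 1
```

Loading makes the inversion robust when the sample covariance is estimated from few snapshots or contaminated by the desired signal, which is exactly the inaccurate-covariance failure mode described above.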
A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution
Directory of Open Access Journals (Sweden)
LIU Wanke
2016-02-01
Due to the high dimension and precision of the ambiguity vector under multi-frequency, multi-system GNSS observations, a major limit on the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block processing LLL algorithm is proposed, based on an analysis of the relationship between the reduction time and the dimension and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by block-processing the ambiguity variance-covariance matrix, which decreases the dimension of each single reduction matrix. The new algorithm is validated with two groups of measured data. The results show that its computing efficiency increased by 65.2% and 60.2%, respectively, compared with the LLL algorithm when a reasonable number of blocks is chosen.
New Search Space Reduction Algorithm for Vertical Reference Trajectory Optimization
Directory of Open Access Journals (Sweden)
Alejandro MURRIETA-MENDOZA
2016-06-01
Burning the fuel required to sustain a given flight releases pollution such as carbon dioxide and nitrogen oxides, and the amount of fuel consumed is also a significant expense for airlines. It is desirable to reduce fuel consumption to reduce both pollution and flight costs. To increase fuel savings in a given flight, one option is to compute the most economical vertical reference trajectory (or flight plan). A deterministic algorithm was developed using a numerical aircraft performance model to determine the most economical vertical flight profile considering take-off weight, flight distance, step climb and weather conditions. This algorithm is based on linear interpolations of the performance model using the Lagrange interpolation method. The algorithm downloads the latest available forecast from Environment Canada according to the departure date and flight coordinates, and calculates the optimal trajectory taking into account the effects of wind and temperature. Techniques to avoid unnecessary calculations are implemented to reduce the computation time. The costs of the reference trajectories proposed by the algorithm are compared with the costs of the reference trajectories proposed by a commercial flight management system, using the fuel consumption estimated by the FlightSim® simulator made by Presagis®.
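The interpolation step can be illustrated with SciPy's Lagrange interpolator; the table values below are hypothetical stand-ins for one entry of the numerical performance model:

```python
import numpy as np
from scipy.interpolate import lagrange

# Hypothetical performance-table row: fuel flow (kg/h) tabulated at a few
# gross weights (tonnes) for one altitude/speed entry of the model.
weights_t = np.array([50.0, 60.0, 70.0, 80.0])
fuel_kg_h = np.array([2100.0, 2350.0, 2640.0, 2980.0])

poly = lagrange(weights_t, fuel_kg_h)   # Lagrange interpolating polynomial
print(poly(65.0))                       # fuel flow estimated at 65 t
```

Interpolating between tabulated grid points like this is what lets the algorithm evaluate candidate profiles at arbitrary weights and conditions without re-querying the full performance database.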
Multichannel transfer function with dimensionality reduction
Kim, Han Suk
2010-01-17
The design of transfer functions for volume rendering is a difficult task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel. In this paper, we propose a new method for transfer function design. Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. The high-dimensional data of the domain is reduced by applying recently developed nonlinear dimensionality reduction algorithms. In this paper, we used Isomap as well as a traditional algorithm, Principal Component Analysis (PCA). Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. In this publication we report on the impact of the dimensionality reduction algorithms on transfer function design for confocal microscopy data.
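A sketch of the reduction step with scikit-learn, using random numbers as a stand-in for the per-voxel intensity/gradient/curvature/texture features; both the linear baseline (PCA) and the nonlinear method (Isomap) reduce the feature space to the three dimensions a transfer function editor can display:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Hypothetical per-voxel feature matrix: rows are voxels, columns are
# intensity, gradient magnitude, curvature and texture measures per channel.
features = np.random.rand(2000, 12)

pca_embedding = PCA(n_components=3).fit_transform(features)        # linear
isomap_embedding = Isomap(n_components=3).fit_transform(features)  # nonlinear
# Either 3-D embedding can now serve as the domain of the transfer function.
```

Isomap preserves geodesic distances along the feature manifold, which is why it can separate voxel populations that PCA's linear projection mixes together.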
An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction
International Nuclear Information System (INIS)
Mundy, Daniel W.; Herman, Michael G.
2011-01-01
parallel to the image plane. This effect decreases the sum of the image, thereby also affecting the mean, standard deviation, and SNR of the image. All back-projected events associated with a simulated point source intersected the voxel containing the source and the FWHM of the back-projected image was similar to that obtained from the marching method. Conclusions: The slight deficit to image quality observed with the threshold-based back-projection algorithm described here is outweighed by the 75% reduction in computation time. The implementation of this method requires the development of an optimum threshold function, which determines the overall accuracy of the method. This makes the algorithm well-suited to applications involving the reconstruction of many large images, where the time invested in threshold development is offset by the decreased image reconstruction time. Implemented in a parallel-computing environment, the threshold-based algorithm has the potential to provide real-time dose verification for radiation therapy.
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. Applying successive relaxation to the SG iteration (the SG-SR method) yields an additional improvement in convergence speed over the standard Savitzky-Golay procedure. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved a 1 order of magnitude improvement in iteration number and a 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, which enables a real-time procedure in practical situations.
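A sketch of the idea, assuming the widely used peak-stripping variant of iterative SG baseline estimation with an over-relaxation factor bolted on; the exact SG-SR update and its Gauss-Seidel ordering are not reproduced from the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline_subtract(spectrum, window=151, order=3, iters=50, omega=1.5):
    """Iterative SG baseline estimation with a relaxation factor omega
    (omega = 1 recovers the plain SG iteration). window must be odd and
    shorter than the spectrum. The min() step strips Raman peaks so the
    estimate converges to the slowly varying fluorescence background."""
    baseline = spectrum.astype(float).copy()
    for _ in range(iters):
        smoothed = savgol_filter(baseline, window, order)
        update = np.minimum(baseline, smoothed)            # peak stripping
        baseline = baseline + omega * (update - baseline)  # over-relaxation
    return spectrum - baseline                             # corrected signal
```

The relaxation factor accelerates the slow tail of the iteration, which is where the standard SG procedure spends most of its time; that is the mechanism behind the reported speedups.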
Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity
Directory of Open Access Journals (Sweden)
Xue Shan
2015-01-01
Although commercial recommendation systems have made certain achievements in travelling route development, they face a series of challenges because of people's increasing interest in travelling. The core of a recommendation system is its recommendation algorithm, and the strengths of the algorithm largely determine the effectiveness of the system. On this basis, this paper analyses the traditional collaborative filtering algorithm and illustrates its deficiencies, such as rating unicity and rating matrix sparsity, then proposes an improved algorithm combining a user-based multi-similarity algorithm with a user-based element similarity algorithm, so as to compensate for these deficiencies within a controllable range. Experimental results show that the improved algorithm has obvious advantages over the traditional one, in particular in remedying rating matrix sparsity and rating unicity.
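For reference, a minimal user-based collaborative filtering baseline of the kind the paper starts from, using a single cosine similarity (the paper's improvement combines several user-based similarity measures, which is not reproduced here):

```python
import numpy as np

def predict_ratings(R):
    """User-based CF sketch: cosine similarity between users' rating
    vectors, predictions as similarity-weighted averages of the other
    users' ratings. Zeros in R denote missing ratings."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    sim = (R @ R.T) / (norms * norms.T)   # user-user cosine similarity
    np.fill_diagonal(sim, 0.0)            # ignore self-similarity
    weights = np.abs(sim).sum(axis=1, keepdims=True)
    weights[weights == 0] = 1.0
    return (sim @ R) / weights            # predicted rating matrix
```

With a sparse rating matrix, the single cosine measure above has few co-rated items to work with, which is precisely the sparsity weakness the improved multi-similarity combination targets.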
Duality based optical flow algorithms with applications
DEFF Research Database (Denmark)
Rakêt, Lars Lau
We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D
Detection of honeycomb cell walls from measurement data based on Harris corner detection algorithm
Qin, Yan; Dong, Zhigang; Kang, Renke; Yang, Jie; Ayinde, Babajide O.
2018-06-01
A honeycomb core is a discontinuous material with a thin-wall structure—a characteristic that makes accurate surface measurement difficult. This paper presents a cell wall detection method based on the Harris corner detection algorithm using laser measurement data. The vertexes of honeycomb cores are recognized with two different methods: one method is the reduction of data density, and the other is the optimization of the threshold of the Harris corner detection algorithm. Each cell wall is then identified in accordance with the neighboring relationships of its vertexes. Experiments were carried out for different types and surface shapes of honeycomb cores, where the proposed method was proved effective in dealing with noise due to burrs and/or deformation of cell walls.
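The detection step maps directly onto OpenCV's Harris implementation; the file name and the threshold fraction below are illustrative, and optimising that threshold is precisely the paper's second tuning knob:

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale height/intensity map rasterised from
# the laser measurement data of the honeycomb surface.
gray = cv2.imread('honeycomb_scan.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

response = cv2.cornerHarris(gray, blockSize=5, ksize=3, k=0.04)
threshold = 0.01 * response.max()             # illustrative threshold choice
vertices = np.argwhere(response > threshold)  # candidate cell-wall vertexes
# Neighbouring vertex pairs can then be linked into candidate cell walls
# according to their adjacency, as the paper describes.
```

Reducing data density before running the detector (the paper's first method) both suppresses burr noise and sharpens the corner response at true vertexes.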
McKinney, Mark C; Riley, Jeffrey B
2007-12-01
The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below target despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a randomized clinical trial to test the clinical procedure guideline algorithm against current standard clinical practice.
A range-based predictive localization algorithm for WSID networks
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location, based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
DE and NLP Based QPLS Algorithm
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP), in terms of fitting accuracy and computational cost.
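SciPy ships a ready-made differential evolution optimizer, so the DE-solves-the-NLP idea can be sketched with a stand-in objective; the quadratic inner-relation fit below is hypothetical, not the paper's soft-sensor model:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Stand-in for the NLP inside QPLS: fit quadratic inner-relation
# parameters (a, b, c) so that y ~ a*t**2 + b*t + c for latent scores t.
t = np.linspace(-1, 1, 50)
y = 0.8 * t**2 - 0.3 * t + 0.1 + 0.01 * np.random.randn(t.size)

def sse(params):
    a, b, c = params
    return np.sum((y - (a * t**2 + b * t + c))**2)   # sum of squared errors

result = differential_evolution(sse, bounds=[(-2, 2)] * 3, seed=0)
print(result.x, result.fun)
```

DE needs no gradients and tolerates multimodal objectives, which is the advantage claimed over the gradient-based SQP baseline.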
A difference tracking algorithm based on discrete sine transform
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to gray-level changes in the image: when brightness or gray level changes, the tracker is disturbed by high-frequency information, tracking accuracy drops, and the target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform (DST) is proposed to reduce the influence of gray-level or brightness changes. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, and the Hamming distance measures the similarity between each candidate window and the template; the window closest to the template is taken as the tracked target, which then updates the template. The algorithm is tested in this paper: compared with SSD and NCC template matching, it tracks the target stably under gray-level or brightness changes, and its speed meets real-time requirements.
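One plausible reading of the DST-plus-difference signature, sketched with SciPy; the abstract does not specify the exact coding or window search, so the details below are illustrative:

```python
import numpy as np
from scipy.fft import dst

def dst_signature(patch):
    """Binary signature of an image patch: discrete sine transform of the
    flattened patch, then the sign of differences between neighbouring
    coefficients. Comparing signs rather than magnitudes is what makes
    the comparison insensitive to proportional gray-level changes."""
    coeffs = dst(patch.astype(float).ravel(), type=2)
    return np.diff(coeffs) > 0

def hamming(sig_a, sig_b):
    return np.count_nonzero(sig_a != sig_b)  # smaller means more similar

# The candidate window (e.g. around the Kalman-predicted position) with the
# smallest Hamming distance to the template signature is taken as the target.
```

Binary signatures also make each comparison a cheap bit-wise count, which is consistent with the real-time speed the paper reports.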
Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.
Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed
2018-03-01
Speed ripple at low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is considered one of the major issues to be treated. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low-speed operation. The main idea of the proposed control algorithm is to design a specific objective function in order to exploit the fast optimization process of the GW optimizer, whose role is to find the optimal input controls that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller and the parameters of its objective function are easy to tune. The GW controller is compared with a PI controller on a real test bench, and the superiority of the former is highlighted. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Othman M. K. Alsmadi
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach uses genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Energy Technology Data Exchange (ETDEWEB)
Chen, Che-Yu; King, Patrick K.; Li, Zhi-Yun [Department of Astronomy, University of Virginia, Charlottesville, VA 22901 (United States)
2016-10-01
Diffuse striations in molecular clouds are preferentially aligned with local magnetic fields, whereas dense filaments tend to be perpendicular to them. When and why this transition occurs remain uncertain. To explore the physics behind this transition, we compute the histogram of relative orientation (HRO) between the density gradient and the magnetic field in three-dimensional magnetohydrodynamic (MHD) simulations of prestellar core formation in shock-compressed regions within giant molecular clouds. We find that, in the magnetically dominated (sub-Alfvénic) post-shock region, the gas structure is preferentially aligned with the local magnetic field. For overdense sub-regions with super-Alfvénic gas, their elongation becomes preferentially perpendicular to the local magnetic field. The transition occurs when self-gravitating gas gains enough kinetic energy from the gravitational acceleration to overcome the magnetic support against the cross-field contraction, which results in a power-law increase of the field strength with density. Similar results can be drawn from HROs in projected two-dimensional maps with integrated column densities and synthetic polarized dust emission. We quantitatively analyze our simulated polarization properties, and interpret the reduced polarization fraction at high column densities as the result of increased distortion of magnetic field directions in trans- or super-Alfvénic gas. Furthermore, we introduce measures of the inclination and tangledness of the magnetic field along the line of sight as the controlling factors of the polarization fraction. Observations of the polarization fraction and angle dispersion can therefore be utilized in studying local magnetic field morphology in star-forming regions.
The scientific and technical e-book in the Spanish context
Directory of Open Access Journals (Sweden)
Romero Otero, Irene Sofía
2015-12-01
This paper describes some of the most debated aspects of the e-book in Spain, especially the scientific and technical e-book. To this end, the document addresses VAT regulations, the prices proposed by publishers, and the need to reach new markets of readers through different business models. It also draws some conclusions about the advantages of this resource, whose broad intrinsic possibilities as an intangible service for a new type of text facilitate worldwide distribution and dissemination. In this way, the e-book has allowed publishers to adapt to the real needs of their customers and to current monograph acquisition processes. Accordingly, this article aims to give a general overview of how the e-book market is behaving in the Spanish context.
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
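Delta scattering (Woodcock tracking) is simple enough to sketch in one dimension; the geometry and cross-sections below are illustrative, not those of the study:

```python
import numpy as np

def delta_track(mu_of, mu_max, x0=0.0, rng=None):
    """Delta-scattering sketch in a 1-D slab [0, 1]: sample flight
    distances from the majorant cross-section mu_max, then accept each
    tentative collision as real with probability mu(x)/mu_max. No
    boundary or collision checks are needed inside the heterogeneous
    geometry, which is the source of the reported speedup."""
    rng = rng if rng is not None else np.random.default_rng()
    x = x0
    while True:
        x += -np.log(rng.random()) / mu_max      # flight to tentative collision
        if x > 1.0:
            return None                          # photon escaped the slab
        if rng.random() < mu_of(x) / mu_max:     # virtual vs. real collision
            return x

# Example: two-region slab with attenuation 0.5 and 2.0 (majorant 2.0).
site = delta_track(lambda x: 0.5 if x < 0.4 else 2.0, mu_max=2.0)
```

The rejected ("virtual") collisions are the price paid for skipping boundary checks; the method wins whenever the geometry has many small cells relative to the mean free path, as in the 5 mm boundary case quoted above.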
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Shi-hua Zhan
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated on benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
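A condensed sketch of the LBSA loop for the TSP with 2-opt neighbours; the feedback update here replaces the list maximum after every accepted worse move, a simplification of the schedule adaptation described above:

```python
import numpy as np

def tour_length(tour, dist):
    return dist[tour, np.roll(tour, -1)].sum()

def lbsa_tsp(dist, list_len=100, iters=20000, p0=0.1, rng=None):
    rng = np.random.default_rng(rng)
    n = dist.shape[0]
    tour = rng.permutation(n)
    cost = tour_length(tour, dist)

    def neighbour():
        # 2-opt move: reverse a random segment of the current tour
        i, j = sorted(rng.choice(n, size=2, replace=False))
        cand = tour.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]
        return cand, tour_length(cand, dist)

    # seed the list so initial worse moves are accepted w.p. roughly p0
    temps = sorted((abs(neighbour()[1] - cost) / -np.log(p0) + 1e-9
                    for _ in range(list_len)), reverse=True)
    for _ in range(iters):
        t_max = temps[0]                  # list maximum drives Metropolis
        cand, c = neighbour()
        if c < cost or rng.random() < np.exp((cost - c) / t_max):
            if c > cost:                  # worse move accepted: feed back
                temps[0] = (cost - c) / np.log(rng.uniform(0.01, 0.99))
                temps.sort(reverse=True)
            tour, cost = cand, c
    return tour, cost
```

Because the list is seeded from the instance itself and updated from accepted moves, the schedule adapts to the solution-space topology instead of requiring a hand-tuned initial temperature and cooling rate.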
Research on personalized recommendation algorithm based on spark
Li, Zeng; Liu, Yu
2018-04-01
With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet people's needs, and how to recommend products to the users interested in them has become both an opportunity and a challenge of the big data era. At present, each platform enterprise has its own recommendation algorithm, but making efficient and accurate pushes of information is still an unsolved problem for personalized recommendation systems. In this paper, a hybrid algorithm combining user-based collaborative filtering and content-based recommendation through weighted processing is proposed on Spark to improve the efficiency and accuracy of recommendation. Experiments show that recommendation under this scheme is more efficient and accurate.
Fast image matching algorithm based on projection characteristics
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension, and then performs matching and identification through one-dimensional correlation; moreover, because the projections are normalized, matching remains correct when the image brightness or signal amplitude scales proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while maintaining matching accuracy.
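A direct NumPy sketch of the projection idea; the window scan is kept brute-force for clarity, and the point is that each comparison uses two normalised 1-D profiles instead of a full 2-D correlation:

```python
import numpy as np

def normalise(v):
    return (v - v.mean()) / (v.std() + 1e-12)  # invariant to gain and offset

def projection_match(image, template):
    """Score each candidate position by 1-D normalised correlation of the
    row and column projections of the window against the template's."""
    th, tw = template.shape
    t_col = normalise(template.sum(axis=0))    # column projection, length tw
    t_row = normalise(template.sum(axis=1))    # row projection, length th
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            win = image[y:y + th, x:x + tw]
            score = (normalise(win.sum(axis=0)) @ t_col) / tw \
                  + (normalise(win.sum(axis=1)) @ t_row) / th
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

The zero-mean, unit-variance normalisation of each profile is what makes the score unchanged under proportional brightness or amplitude scaling, as the abstract claims.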
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. To address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel structure with the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate is not limited by the memory bandwidth. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.
Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm
Yumin, Dong; Li, Zhao
2014-01-01
The quantum-behaved particle swarm algorithm is a new intelligent optimization algorithm; it has few parameters and is easily implemented. In view of the premature convergence problem of the existing quantum-behaved particle swarm optimization algorithm, a quantum particle swarm optimization algorithm based on artificial fish swarm is put forward. The new algorithm builds on the quantum-behaved particle swarm algorithm, introducing the swarming and following behaviors, meanwhile using the a...
Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm
Directory of Open Access Journals (Sweden)
KeKe Gen
2015-01-01
The article considers the data association problem in simultaneous localization and mapping (SLAM) for determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator; an urgent task is to develop a control system that allows autonomous flight. The SLAM algorithm, which predicts the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem using an improved ant algorithm. Data association for SLAM is meant to establish a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem in SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes; adding random perturbations when updating the global pheromone avoids local optima, and limiting the pheromone on each route increases the search space at a reasonable computational cost. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase computation speed, local data association is used instead of global data association. The first stage of the algorithm defines the targets in the matching space and the observed landmarks that can be associated according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm
Directory of Open Access Journals (Sweden)
Peng Li
2016-01-01
The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for the ACA is important. We propose a parameter selection method for the ACA based on the bacterial foraging algorithm (BFA), considering the coupling effects between different parameters. Firstly, the ACA parameters are mapped into a multidimensional space, and a chemotactic operator is used to ensure that each parameter group approaches its optimal value, speeding up the convergence of each parameter set. Secondly, the speed of optimizing the entire parameter set is increased using a reproduction operator. Finally, an elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimum. To validate the effectiveness of this method, the results were compared with those obtained using a genetic algorithm (GA) and particle swarm optimization (PSO), and simulations were conducted on different grid maps for robot path planning. The results indicate that parameter selection for the ACA based on the BFA is the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.
A Flocking Based algorithm for Document Clustering Analysis
Energy Technology Data Exchange (ETDEWEB)
Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL
2006-01-01
Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the flocking-based algorithm does not require initial partitional seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithm for real document clustering.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack patterns, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers, with decision rules provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
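The stump-based ensemble is straightforward to reproduce with scikit-learn (version 1.2 or later for the estimator keyword); the synthetic data below merely stands in for a preprocessed connection-record dataset, and the paper's separate rule handling for categorical and continuous features is not replicated:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in for a preprocessed connection-record dataset; here all
# features are numeric.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Depth-1 trees are exactly the decision stumps used as weak classifiers.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print('detection accuracy:', clf.score(X_te, y_te))
```

Each stump tests a single feature against a single threshold, so training and scoring stay cheap even with hundreds of boosting rounds, which matches the low computational complexity the paper reports.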
Q-learning-based adjustable fixed-phase quantum Grover search algorithm
International Nuclear Information System (INIS)
Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun
2017-01-01
We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is a model-free reinforcement learning strategy in essence, performs a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids getting trapped in locally optimal situations, thereby enabling success probabilities to approach one. (author)
An assembly sequence planning method based on composite algorithm
Directory of Open Access Journals (Sweden)
Enfu LIU
2016-02-01
To solve the combinatorial explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy assembly knowledge is integrated into the planning process of the genetic algorithm and the ant algorithm to obtain an accurate solution. Finally, an example is given to verify the feasibility of the composite algorithm.
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table into a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables tested, the simplification steps alone solved the problem.
International Nuclear Information System (INIS)
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman; Manhart, Michael
2018-01-01
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT catheter arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical-board-approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8 s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near-field (<1 cm) and far-field (>3 cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak artefacts with MAR for both artefact sources (p<0.001), while values remained stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003), with a trend toward higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, the prototype MAR algorithm improves image quality in the proximity of metallic coil and catheter artefacts. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman [University Hospital Zurich, Department of Radiology, Zurich (Switzerland); Manhart, Michael [Imaging Concepts, HC AT IN IMC, Siemens Healthcare GmbH, Advanced Therapies, Innovation, Forchheim (Germany)
2018-01-15
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near-field (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003), with a trend towards higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, the prototype MAR algorithm improves image quality in the proximity of metallic coil and catheter artefacts. (orig.)
Multi-objective mixture-based iterated density estimation evolutionary algorithms
Thierens, D.; Bosman, P.A.N.
2001-01-01
We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not give a significant impact on the establishment of the decision tree in terms of removing irrelevant features, which leads to a major problem in the decision tree classification process called over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for classification. From the experiments conducted using the cervical cancer data set from the UCI repository, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, reaching a 90.70% accuracy rate.
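The PCA-plus-decision-tree pipeline described above is straightforward to sketch. The snippet below is a minimal illustration rather than the authors' code: the dataset, the number of retained components, and the use of scikit-learn's CART-based DecisionTreeClassifier (with the entropy criterion as a rough stand-in for C4.5's information gain) are all assumptions.

```python
# Minimal sketch: PCA-based feature reduction feeding a decision tree.
from sklearn.datasets import load_breast_cancer  # placeholder dataset
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                 # PCA assumes centred/scaled features
    PCA(n_components=10),             # keep 10 non-correlated components
    DecisionTreeClassifier(criterion="entropy", random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```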
Graphene-based tunable non-foster circuit for VHF applications
Energy Technology Data Exchange (ETDEWEB)
Tian, Jing; Nagarkoti, Deepak Singh; Rajab, Khalid Z.; Hao, Yang, E-mail: y.hao@qmul.ac.uk [School of Electronic Engineering and Computer Science, Queen Mary, University of London, London, E1 4NS (United Kingdom)
2016-06-15
This paper presents a negative impedance converter (NIC) based on graphene field effect transistors (GFETs) for VHF applications. The NIC is designed following Linvill’s open circuit stable (OCS) topology. The DC modelling parameters of GFET are extracted from a device measured by Meric et al. [IEEE Electron Devices Meeting, 23.2.1 (2010)] Estimated parasitics are also taken into account. Simulation results from Keysight Advanced Design System (ADS) show good NIC performance up to 200 MHz and the value of negative capacitance is directly proportional to the capacitive load. In addition, it has been shown that by varying the supply voltage the value of negative capacitance can also be tuned. The NIC stability has been tested up to 2 GHz (10 times the maximum operation frequency) using the Nyquist stability criterion to ensure there are no oscillation issues.
Graphene-based tunable non-foster circuit for VHF applications
Directory of Open Access Journals (Sweden)
Jing Tian
2016-06-01
Full Text Available This paper presents a negative impedance converter (NIC) based on graphene field effect transistors (GFETs) for VHF applications. The NIC is designed following Linvill’s open circuit stable (OCS) topology. The DC modelling parameters of GFET are extracted from a device measured by Meric et al. [IEEE Electron Devices Meeting, 23.2.1 (2010)] Estimated parasitics are also taken into account. Simulation results from Keysight Advanced Design System (ADS) show good NIC performance up to 200 MHz and the value of negative capacitance is directly proportional to the capacitive load. In addition, it has been shown that by varying the supply voltage the value of negative capacitance can also be tuned. The NIC stability has been tested up to 2 GHz (10 times the maximum operation frequency) using the Nyquist stability criterion to ensure there are no oscillation issues.
A Novel Quad Harmony Search Algorithm for Grid-Based Path Finding
Directory of Open Access Journals (Sweden)
Saso Koceski
2014-09-01
Full Text Available A novel approach to the problem of grid-based path finding is introduced. The method is a block-based search algorithm built on two algorithms: the quad-tree algorithm, which offers a great opportunity to decrease the time needed to compute a solution, and the harmony search (HS) algorithm, a meta-heuristic used to obtain the optimal solution. The quad HS algorithm uses the quad-tree decomposition of free space in the grid to mark free areas and treat each of them as a single node, which greatly improves execution time. The results of the quad HS algorithm were compared to those of other meta-heuristic algorithms, i.e., ant colony, genetic algorithm, particle swarm optimization and simulated annealing, and it proved to obtain the best results in terms of time while still finding the optimal path.
Text Clustering Algorithm Based on Random Cluster Core
Directory of Open Access Journals (Sweden)
Huang Long-Jun
2016-01-01
Full Text Available Clustering has become a popular text mining algorithm, but huge data volumes place higher demands on the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density that also applies neighbouring heuristic rules; the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculations.
Agent-based Algorithm for Spatial Distribution of Objects
Collier, Nathan
2012-06-02
In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
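A minimal sketch of the bubble dynamics can convey the idea: each object repels its close neighbours, and positions evolve by integrating damped equations of motion. The force law, constants, and explicit Euler integration below are illustrative assumptions, not the paper's exact formulation.

```python
# Bubble-style point distribution: close pairs repel, motion is damped, and
# the resulting ODEs are advanced with explicit Euler steps.
import numpy as np

def relax_bubbles(pts, radius=0.15, k=1.0, damping=0.9, steps=200, dt=0.02):
    pts, vel = pts.copy(), np.zeros_like(pts)
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                    # ignore self-pairs
        overlap = np.clip(2 * radius - dist, 0.0, None)   # only close pairs repel
        force = ((k * overlap / dist)[..., None] * diff).sum(axis=1)
        vel = damping * vel + dt * force                  # damped bubble dynamics
        pts = np.clip(pts + dt * vel, 0.0, 1.0)           # confine to unit square
    return pts

points = relax_bubbles(np.random.default_rng(0).random((50, 2)))
```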
Wavelet Adaptive Algorithm and Its Application to MRE Noise Control System
Directory of Open Access Journals (Sweden)
Zhang Yulin
2015-01-01
Full Text Available To address the limitations of the conventional adaptive algorithms used in active noise control (ANC) systems, this paper proposes and studies two wavelet-based adaptive algorithms. The two algorithms are applied to a noise control system including magnetorheological elastomers (MRE), a smart viscoelastic material characterized by a complex modulus that depends on vibration frequency and is controllable by external magnetic fields. Simulation results reveal that the wavelet-based Decomposition LMS algorithm (D-LMS) and Decomposition and Reconstruction LMS algorithm (DR-LMS) can significantly improve the noise reduction performance of the MRE control system compared with the traditional LMS algorithm.
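For reference, the baseline LMS update that both wavelet variants build on can be sketched as follows. This is plain LMS only; as we read the abstract, D-LMS and DR-LMS first split the signal into wavelet sub-bands and adapt per band, which this sketch does not include.

```python
# Plain LMS adaptive filter, the baseline against which D-LMS/DR-LMS are
# compared; tap count and step size are illustrative choices.
import numpy as np

def lms(x, d, n_taps=16, mu=0.01):
    """x: reference input, d: desired signal; returns the error signal e."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # most recent samples first
        y = w @ u                     # filter output
        e[n] = d[n] - y               # residual (the quiet-zone signal in ANC)
        w += mu * e[n] * u            # gradient-descent weight update
    return e
```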
NLSE: Parameter-Based Inversion Algorithm
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
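The core Gauss-Newton iteration behind a nonlinear least-squares estimator like NLSE can be sketched in a few lines. The forward model, data, and finite-difference Jacobian below are illustrative; an inversion code of the kind described here would use a problem-specific (often simulated) Jacobian.

```python
# Gauss-Newton for nonlinear least squares: linearize the residual and
# solve the normal equations at each step (via lstsq for stability).
import numpy as np

def gauss_newton(f, y, p0, n_iter=20, eps=1e-6):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(p)                                  # residual vector
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):                       # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solve J @ step ~= r
        p = p + step
    return p

# Toy example: fit y = a * exp(b * t).
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t)
print(gauss_newton(lambda p: p[0] * np.exp(p[1] * t), y, [1.0, 0.0]))
```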
Star point centroid algorithm based on background forecast
Wang, Jin; Zhao, Rujin; Zhu, Nan
2014-09-01
The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map photographed by an APS detector includes several noise sources which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
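A minimal sketch of background-forecast centroiding: predict the local background level, subtract it, and take the intensity-weighted centroid of the residual. The border-median background model used below is an illustrative stand-in for the paper's forecasting scheme.

```python
# Centroid of a star window after removing a forecast background level.
import numpy as np

def centroid_with_background(window):
    border = np.concatenate([window[0], window[-1],
                             window[1:-1, 0], window[1:-1, -1]])
    bg = np.median(border)                      # forecast background level
    sig = np.clip(window - bg, 0.0, None)       # remove predicted background
    total = sig.sum()
    if total == 0:                              # no star above background
        return None
    ys, xs = np.indices(window.shape)
    return (ys * sig).sum() / total, (xs * sig).sum() / total
```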
Suttle, L. G.; Hare, J. D.; Lebedev, S. V.; Ciardi, A.; Loureiro, N. F.; Burdiak, G. C.; Chittenden, J. P.; Clayson, T.; Halliday, J. W. D.; Niasse, N.; Russell, D.; Suzuki-Vidal, F.; Tubman, E.; Lane, T.; Ma, J.; Robinson, T.; Smith, R. A.; Stuart, N.
2018-04-01
This work presents a magnetic reconnection experiment in which the kinetic, magnetic, and thermal properties of the plasma each play an important role in the overall energy balance and structure of the generated reconnection layer. Magnetic reconnection occurs during the interaction of continuous and steady flows of super-Alfvénic, magnetized, aluminum plasma, which collide in a geometry with two-dimensional symmetry, producing a stable and long-lasting reconnection layer. Optical Thomson scattering measurements show that when the layer forms, ions inside the layer are more strongly heated than electrons, reaching temperatures of T_i ≈ Z̄T_e ≳ 300 eV, much greater than can be expected from strong shock and viscous heating alone. Later in time, as the plasma density in the layer increases, the electron and ion temperatures are found to equilibrate, and a constant plasma temperature is achieved through a balance of the heating mechanisms and radiative losses of the plasma. Measurements from Faraday rotation polarimetry also indicate the presence of significant magnetic field pile-up occurring at the boundary of the reconnection region, which is consistent with the super-Alfvénic velocity of the inflows.
Research on compressive sensing reconstruction algorithm based on total variation model
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, by breaking through the Nyquist sampling theorem, provides a strong theoretical basis that allows compressive sampling of image signals. Using compressed sensing theory in traditional imaging procedures not only reduces the storage space but also greatly reduces the demand on detector resolution. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analysed under different coding modes to verify the stability of the algorithm. This paper also compares and analyses typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
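The structure of the TV-regularized objective can be conveyed with a simple sketch. The code below minimizes ||Ax − b||² + λ·TV_ε(x) by plain gradient descent on a smoothed total variation; the paper's actual solver augments the Lagrangian and applies the alternating direction method, which converges far faster. The measurement matrix and parameters are illustrative.

```python
# Gradient-descent sketch of TV-regularized compressive-sensing recovery.
import numpy as np

def tv_grad(img, eps=1e-6):
    """Gradient of the smoothed total variation sum(sqrt(|grad u|^2 + eps))."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div = (np.diff(gx / mag, axis=1, prepend=0)
           + np.diff(gy / mag, axis=0, prepend=0))  # discrete divergence
    return -div

def recover(A, b, shape, lam=0.02, lr=0.1, iters=500):
    x = np.zeros(shape)
    for _ in range(iters):
        grad_fit = (A.T @ (A @ x.ravel() - b)).reshape(shape)   # data term
        x -= lr * (grad_fit + lam * tv_grad(x))                 # descent step
    return x

# Toy usage: recover a 16x16 piecewise-constant image from 40% measurements.
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
A = rng.normal(size=(int(0.4 * 256), 256)) / 16.0
x_hat = recover(A, A @ img.ravel(), img.shape)
```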
Calculation of the X-ray reflection spectrum of the Ni/C multilayer
Directory of Open Access Journals (Sweden)
A MEDDOUR
2000-12-01
Full Text Available The reflectivity of a single interface in the X-ray domain is very weak, but it is always possible to choose systems that exhibit a strong reflection peak around an angle of incidence characteristic of the material. Such a material is a multilayer, composed of two layers deposited as a sandwich. We developed a program that calculates the reflectivity of such a material following the Abelès method, in which a thin layer is represented by a square matrix containing all the information needed to compute the reflection. The program also accounts for the roughness at the interfaces of the multilayer, given its strong influence on the intensity of the peak appearing in the reflection spectrum. Applying the program to the Ni/C multilayer showed the existence of a peak centred around 31.32°. Its intensity is sensitive to the number of periods in the multilayer, to the thicknesses of the Ni and C thin layers, and to the size of the roughness of the Ni/C and C/Ni interfaces.
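The Abelès characteristic-matrix computation described above can be sketched compactly: each thin layer contributes a 2×2 matrix, and the reflectance of the stack follows from the matrix product. The sketch below handles s-polarization only, omits the interface roughness that the authors' program includes, and uses placeholder optical constants rather than tabulated Ni/C values.

```python
# Abelès characteristic-matrix method for multilayer reflectivity (s-pol).
import numpy as np

def reflectivity(ns, ds, n_sub, lam, theta0):
    """ns, ds: per-layer complex index and thickness (same units as lam);
    theta0: angle of incidence from the surface normal, in radians."""
    k0 = 2 * np.pi / lam
    sin0 = np.sin(theta0)                       # ambient index n0 = 1 assumed
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        cos_j = np.sqrt(1 - (sin0 / n) ** 2 + 0j)
        beta = k0 * n * d * cos_j               # phase thickness of the layer
        p = n * cos_j                           # s-polarization admittance
        M = M @ np.array([[np.cos(beta), -1j * np.sin(beta) / p],
                          [-1j * p * np.sin(beta), np.cos(beta)]])
    p0 = np.cos(theta0)
    ps = n_sub * np.sqrt(1 - (sin0 / n_sub) ** 2 + 0j)
    num = (M[0, 0] + M[0, 1] * ps) * p0 - (M[1, 0] + M[1, 1] * ps)
    den = (M[0, 0] + M[0, 1] * ps) * p0 + (M[1, 0] + M[1, 1] * ps)
    return abs(num / den) ** 2

# Placeholder Ni/C stack: 30 bilayers with illustrative (not tabulated) indices.
n_ni, n_c = 1 - 2.3e-3 + 5e-4j, 1 - 7e-4 + 4e-6j
R = reflectivity([n_ni, n_c] * 30, [2.0, 3.0] * 30, n_sub=n_ni,
                 lam=4.5, theta0=np.radians(58.7))
```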
Parallel image encryption algorithm based on discretized chaotic map
International Nuclear Information System (INIS)
Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue
2008-01-01
Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms.
A Modularity Degree Based Heuristic Community Detection Algorithm
Directory of Open Access Journals (Sweden)
Dongming Chen
2014-01-01
Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. Discovery of community structures is a basic problem of research and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function. These optimization-function-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy compared with previous modularity optimization methods, even though it has lower computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
Creep force modelling for rail traction vehicles based on the Fastsim algorithm
Spiryagin, Maksym; Polach, Oldrich; Cole, Colin
2013-11-01
The evaluation of creep forces is a complex task and their calculation is a time-consuming process for multibody simulation (MBS). A methodology of creep forces modelling at large traction creepages has been proposed by Polach [Creep forces in simulations of traction vehicles running on adhesion limit. Wear. 2005;258:992-1000; Influence of locomotive tractive effort on the forces between wheel and rail. Veh Syst Dyn. 2001(Suppl);35:7-22] adapting his previously published algorithm [Polach O. A fast wheel-rail forces calculation computer code. Veh Syst Dyn. 1999(Suppl);33:728-739]. The most common method for creep force modelling used by software packages for MBS of running dynamics is the Fastsim algorithm by Kalker [A fast algorithm for the simplified theory of rolling contact. Veh Syst Dyn. 1982;11:1-13]. However, the Fastsim code has some limitations which do not allow modelling the creep force - creep characteristic in agreement with measurements for locomotives and other high-power traction vehicles, mainly for large traction creep at low-adhesion conditions. This paper describes a newly developed methodology based on a variable contact flexibility increasing with the ratio of the slip area to the area of adhesion. This variable contact flexibility is introduced in a modification of Kalker's code Fastsim by replacing the constant Kalker's reduction factor, widely used in MBS, by a variable reduction factor together with a slip-velocity-dependent friction coefficient decreasing with increasing global creepage. The proposed methodology is presented in this work and compared with measurements for different locomotives. The modification allows use of the well recognised Fastsim code for simulation of creep forces at large creepages in agreement with measurements without modifying the proven modelling methodology at small creepages.
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Directory of Open Access Journals (Sweden)
Huimin Lu
2013-01-01
Full Text Available This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of the algorithm on its performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information.
Image segmentation algorithm based on T-junctions cues
Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie
2016-03-01
To improve on the over-segmentation and over-merging phenomena of single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, L0 gradient minimization is applied to smooth the target image and eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is obtained using the graph-based algorithm. Finally, the final result is obtained via a region fusion strategy driven by T-junction cues. Experimental results on a variety of images verify the new approach's efficiency in eliminating artifacts caused by noise; segmentation accuracy and time complexity are significantly improved.
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng
2018-01-01
Diverse image fusion methods perform differently, and each has advantages and disadvantages compared with the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameter. Finally, the fused images of the different algorithms are integrated using NMF, because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.
Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks
Ren, Shengwei; Zhang, Li; Zhang, Shibing
2016-10-01
Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communications. Spectrum sensing is the main challenge in cognitive radio. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
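A minimal sketch of such a detector: the test statistic is the energy in the first few autocorrelation lags of the received samples, which white channel noise barely contributes to. The lag count and threshold below are illustrative choices, not the paper's calibrated values.

```python
# Autocorrelation-energy spectrum sensing sketch.
import numpy as np

def autocorr_energy(x, max_lag=8):
    x = x - x.mean()
    r = np.array([np.mean(x[:-k] * np.conj(x[k:]))
                  for k in range(1, max_lag + 1)])
    return float(np.sum(np.abs(r) ** 2))       # energy of lags 1..max_lag

def sense(x, threshold):
    return autocorr_energy(x) > threshold      # True -> channel occupied

# Toy check: a correlated (filtered) signal scores far above white noise.
rng = np.random.default_rng(1)
noise = rng.normal(size=4096)
signal = np.convolve(rng.normal(size=4096), np.ones(8) / 8, mode="same") + 0.1 * noise
print(autocorr_energy(noise), autocorr_energy(signal))
```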
Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos
Directory of Open Access Journals (Sweden)
Jian Wang
2016-01-01
Full Text Available In view of the fact that present chaotic image encryption algorithms based on scrambling (diffusion) are vulnerable to chosen-plaintext (chosen-ciphertext) attacks in the pixel position scrambling process, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaotic sequence and the plaintext itself, realizing an organic fusion of the image features and the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves the plaintext sensitivity of the algorithm and its resistance to chosen-plaintext and chosen-ciphertext attacks. At the same time, it makes full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm can not only effectively resist chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks, but can also effectively improve the efficiency of image encryption, making it a relatively secure and effective approach to image communication.
Betweenness-based algorithm for a partition scale-free graph
International Nuclear Information System (INIS)
Zhang Bai-Da; Wu Jun-Jie; Zhou Jing; Tang Yu-Hua
2011-01-01
Many real-world networks are found to be scale-free. However, graph partition technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation on seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches. (interdisciplinary physics and related areas of science and technology)
Analog Circuit Design Optimization Based on Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Mansour Barari
2014-01-01
Full Text Available This paper investigates an evolutionary-based design system for the automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, the genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of a combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
Improved dynamic-programming-based algorithms for segmentation of masses in mammograms
International Nuclear Information System (INIS)
Dominguez, Alfonso Rojas; Nandi, Asoke K.
2007-01-01
In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.
Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis
Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.
2018-04-01
Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both coefficient of variation and contrast to noise ratio (CNR) were the highest in ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.
LSB Based Quantum Image Steganography Algorithm
Jiang, Nan; Zhao, Na; Wang, Luo
2016-01-01
Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
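For intuition, the classical analogue of plain LSB embedding is sketched below; the paper's contribution is performing the analogous substitution inside NEQR quantum circuits, which a classical sketch cannot capture.

```python
# Classical LSB steganography: substitute pixel LSBs with message bits, and
# blindly extract them from the stego image alone.
import numpy as np

def embed(pixels, bits):
    """Substitute the least significant bit of each pixel with a message bit."""
    out = pixels.copy().ravel()
    out[:len(bits)] = (out[:len(bits)] & ~np.uint8(1)) | np.asarray(bits, np.uint8)
    return out.reshape(pixels.shape)

def extract(pixels, n_bits):
    """Blind extraction: only the stego cover is needed, as in the paper."""
    return (pixels.ravel()[:n_bits] & 1).astype(np.uint8)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(img, msg), len(msg)).tolist() == msg
```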
Human resource recommendation algorithm based on ensemble learning and Spark
Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie
2017-08-01
Aiming at the problem of "information overload" in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. Firstly, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model. Based on the user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments have been done and analysed. The proposed algorithm achieves significant improvements in accuracy, recall rate and coverage, compared with recommendation algorithms such as UserCF and ItemCF.
A Proposal for User-defined Reductions in OpenMP
Energy Technology Data Exchange (ETDEWEB)
Duran, A; Ferrer, R; Klemm, M; de Supinski, B R; Ayguade, E
2010-03-22
Reductions are commonly used in parallel programs to produce a global result from partial results computed in parallel. Currently, OpenMP only supports reductions for primitive data types and a limited set of base language operators. This is a significant limitation for those applications that employ user-defined data types (e. g., objects). Implementing manual reduction algorithms makes software development more complex and error-prone. Additionally, an OpenMP runtime system cannot optimize a manual reduction algorithm in ways typically applied to reductions on primitive types. In this paper, we propose new mechanisms to allow the use of most pre-existing binary functions on user-defined data types as User-Defined Reduction (UDR) operators. Our measurements show that our UDR prototype implementation provides consistently good performance across a range of thread counts without increasing general runtime overheads.
Charge transfer effects in electrocatalytic Ni-C revealed by x-ray photoelectron spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Haslam, G. E.; Chin, X.-Y.; Burstein, G. T. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke St., Cambridge CB2 3QZ (United Kingdom); Sato, K.; Mizokawa, T. [Department of Complexity Science and Engineering, University of Tokyo, 5-1-5 Kashiwanoha, Chiba 277-8651 (Japan)
2012-06-04
Binary Ni-C thin-film alloys, which have been shown to be passive against corrosion in hot sulphuric acid solution whilst also being electrocatalytically active, were investigated by XPS to determine the oxidation state of the metal and carbon components. The Ni component produces a Ni 2p spectrum similar to that of metallic nickel (i.e., no oxidation occurs) but with a 0.3 eV shift to higher binding energy (BE) due to electron donation to the carbon matrix. The C 1s peak shows a shift to lower BE by accepting electrons from the Ni nanocrystals. A cluster-model analysis of the observed Ni 2p spectrum is consistent with the electron transfer from the nickel to the carbon.
Secure image encryption algorithm design using a novel chaos based S-Box
International Nuclear Information System (INIS)
Çavuşoğlu, Ünal; Kaçar, Sezgin; Pehlivan, Ihsan; Zengin, Ahmet
2017-01-01
Highlights: • A new chaotic system is developed for creating the S-Box and the image encryption algorithm. • A chaos-based random number generator is designed with the help of the new chaotic system. NIST tests are run on the generated random numbers to verify randomness. • A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. • The newly developed S-Box-based image encryption algorithm is introduced and an image encryption application is carried out. • To show the quality and strength of the encryption process, security analyses are performed and compared with the AES and chaos algorithms. - Abstract: In this study, an encryption algorithm that uses a chaos-based S-Box is developed for secure and fast image encryption. First of all, a new chaotic system is developed for creating the S-Box and the image encryption algorithm. A chaos-based random number generator is designed with the help of the new chaotic system. Then, NIST tests are run on the generated random numbers to verify randomness. A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. As the next step, the newly developed S-Box-based image encryption algorithm is introduced in detail. Finally, an image encryption application is carried out. To show the quality and strength of the encryption process, security analyses are performed. The proposed algorithm is compared with the AES and chaos algorithms. According to the test results, the proposed image encryption algorithm is secure and fast for image encryption applications.
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
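The elitism-based immigrants idea can be sketched compactly: each generation, the current elite is mutated to generate immigrants that replace the worst individuals. The bit-string encoding, rates, and replacement policy below are illustrative assumptions.

```python
# Elitism-based immigrants for a GA in a dynamic environment: immigrants are
# mutated copies of the current elite, replacing the worst individuals.
import numpy as np

rng = np.random.default_rng(0)

def mutate(bits, rate):
    flip = rng.random(bits.shape) < rate
    return np.where(flip, 1 - bits, bits)

def next_generation(pop, fitness, n_immigrants=10, p_imm_mut=0.2):
    order = np.argsort(fitness)                 # ascending: worst first
    elite = pop[order[-1]]
    immigrants = np.array([mutate(elite, p_imm_mut) for _ in range(n_immigrants)])
    pop = pop.copy()
    pop[order[:n_immigrants]] = immigrants      # replace the worst individuals
    # ... selection, crossover and standard mutation would follow here
    return pop

pop = rng.integers(0, 2, size=(50, 32))
pop = next_generation(pop, pop.sum(axis=1))     # one-max fitness as a toy example
```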
TaDb: A time-aware diffusion-based recommender algorithm
Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan
2015-02-01
Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
Services of the Czechoslovak Nuclear Information Centre, based on IAEA information sources
International Nuclear Information System (INIS)
Dufkova, M.
1987-01-01
Information services provided by the Nuclear Information Centre (NIC), the sector information centre for the Czechoslovak nuclear programme, proceed primarily from its membership of INIS. SDI searches are computer-processed from INIS Atomindex tapes; their price for one query is 3,948 Czechoslovak crowns per year. The user can at any time put forward a request for tuning or for a change of the initially requested query formulation. A copy of the SDI searches is provided for 1,200 Czechoslovak crowns per annum to other interested persons or institutions, who cannot, however, influence the query formulation. Since 1979, the NIC has been processing retrospective searches by direct online access to the INIS data base. The price of these searches ranges between 1,000 and 1,500 Czechoslovak crowns. Under the same conditions the NIC also provides retrospective searches from AGRIS. Since 1986, the NIC has extended its services by providing data from the UVTEI-UTZ data base centre in Prague. Since 1985, retrospective searches can be processed directly at the workplace of those interested, through mobile terminals. All said services are followed up by the services of the NIC library, which contains more than 215,000 microfiches with full texts of nonconventional documents incorporated in INIS. (Z.M.)
Incident Light Frequency-Based Image Defogging Algorithm
Directory of Open Access Journals (Sweden)
Wenbo Zhang
2017-01-01
Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert law, the influence of the frequency of the incident light on the transmittance is analyzed, and the ratios between each channel's transmittance are derived. Then, in order to increase efficiency, the input image is resized to a smaller size before acquiring the refined transmittance, which is then resized back to the size of the original image. Finally, the transmittances of all channels are obtained with the help of the proportions between the color channels, and they are used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image in comparison with the original algorithm, meaning that the problem of high color saturation is eliminated. What is more, the improved algorithm runs four to nine times faster than the original algorithm.
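For context, the baseline dark channel prior recovery that the paper improves on can be sketched as below, using a single shared transmittance; the improvement described above replaces this with per-channel transmittances derived from the Beer-Lambert ratios. Patch size and parameters are illustrative.

```python
# Baseline dark channel prior defogging (single shared transmittance).
import numpy as np
from scipy.ndimage import minimum_filter

def defog(img, patch=15, omega=0.95, t0=0.1):
    """img: HxWx3 float array in [0, 1]."""
    dark = minimum_filter(img.min(axis=2), size=patch)        # dark channel
    brightest = dark.ravel().argsort()[-10:]                  # haziest pixels
    A = img.reshape(-1, 3)[brightest].max(axis=0)             # airlight estimate
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(1 - omega * norm_dark, t0, 1.0)               # transmittance
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```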
Balouchestani, Mohammadreza; Krishnan, Sridhar
2014-01-01
Long-term recording of Electrocardiogram (ECG) signals plays an important role in health care systems for diagnostic and treatment purposes of heart diseases. Clustering and classification of the collected data are essential parts of detecting concealed information in the P-QRS-T waves of long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm that can easily scan large ECG datasets to establish low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Then, two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by sorting the data using the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms by increasing classification accuracy by 11%. In addition, the proposed algorithm achieves classification accuracies for the K-NN and PNN classifiers, and a Receiver Operating Characteristics (ROC) area, of 99.98%, 99.83%, and 99.75%, respectively.
Pankow, James F; Barsanti, Kelley C; Peyton, David H
2003-01-01
Solution ¹H NMR (proton NMR) spectroscopy was used to measure the distribution of nicotine between its free-base and protonated forms at 20 °C in (a) water; (b) glycerin/water mixtures; and (c) puff-averaged "smoke" particulate matter (PM) produced by the Eclipse cigarette, a so-called "harm reduction" cigarette manufactured by R. J. Reynolds (RJR) Tobacco Co. Smoke PM from the Eclipse contains glycerin, water, nicotine, and numerous other components. Smoke PM from the Eclipse yielded a signal for the three N-methyl protons on nicotine at a chemical shift of δ (ppm) = 2.79 relative to a trimethylsilane standard. With α_fb = fraction of the total liquid nicotine in free-base form, and α_a = fraction in the acidic, monoprotonated NicH⁺ form, then α_a + α_fb ≈ 1. (The diprotonated form of nicotine was assumed negligible.) When the three types of solutions were adjusted so that α_a ≈ 1, the N-methyl protons yielded δ_a = 2.82 (Eclipse smoke PM); 2.79 (35% water/65% glycerin); and 2.74 (water). When the solutions were adjusted so that α_fb ≈ 1, the N-methyl protons yielded δ_fb = 2.16 (Eclipse smoke PM); 2.13 (35% water/65% glycerin); and 2.10 (water). In all of the solutions, the rate of proton exchange between NicH⁺ and Nic was fast relative to the ¹H NMR chemical shift difference in hertz. Each solution containing both NicH⁺ and Nic thus yielded a single N-methyl peak at a δ given by δ = α_a δ_a + α_fb δ_fb, so that δ varied linearly between δ_a and δ_fb. Since α_fb = (δ_a − δ)/(δ_a − δ_fb), then δ = 2.79 for the unadjusted Eclipse smoke PM indicates α_fb ≈ 0.04. The effective pH of the Eclipse smoke PM at 20 °C may then be calculated as pH_eff = 8.06 + log[α_fb/(1 − α_fb)] = 6.69, where 8.06 is the pKa of NicH⁺ in water at 20 °C. The measurements obtained for the puff-averaged Eclipse smoke PM
A Turn-Projected State-Based Conflict Resolution Algorithm
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of an aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
Objective assessment of image quality and dose reduction in CT iterative reconstruction
Energy Technology Data Exchange (ETDEWEB)
Vaishnav, J. Y., E-mail: jay.vaishnav@fda.hhs.gov; Jung, W. C. [Diagnostic X-Ray Systems Branch, Office of In Vitro Diagnostic Devices and Radiological Health, Center for Devices and Radiological Health, United States Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States); Popescu, L. M.; Zeng, R.; Myers, K. J. [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, United States Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States)
2014-07-15
Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.
Objective assessment of image quality and dose reduction in CT iterative reconstruction
International Nuclear Information System (INIS)
Vaishnav, J. Y.; Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.
2014-01-01
Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.
DFT-Domain Based Single-Microphone Noise Reduction for Speech Enhancement
DEFF Research Database (Denmark)
C. Hendriks, Richard; Gerkmann, Timo; Jensen, Jesper
As speech processing devices like mobile phones, voice controlled devices, and hearing aids have increased in popularity, people expect them to work anywhere and at any time without user intervention. However, the presence of acoustical disturbances limits the use of these applications, degrades their performance, or causes the user difficulties in understanding the conversation or appreciating the device. A common way to reduce the effects of such disturbances is through the use of single-microphone noise reduction algorithms for speech enhancement. The field of single-microphone noise reduction...
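A minimal DFT-domain noise reduction sketch of the kind this field studies: a spectral gain is computed per frame against a noise estimate taken from the first (assumed speech-free) frames, then the signal is resynthesized by overlap-add. Frame length, overlap, and the gain floor are illustrative choices.

```python
# DFT-domain single-microphone noise reduction (power spectral subtraction).
import numpy as np

def enhance(x, frame=512, hop=256, noise_frames=10, floor=0.1):
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i*hop:i*hop+frame] * win for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    noise_psd = np.mean(np.abs(spec[:noise_frames])**2, axis=0)   # noise estimate
    snr = np.abs(spec)**2 / np.maximum(noise_psd, 1e-12)
    gain = np.maximum(1.0 - 1.0 / np.maximum(snr, 1e-12), floor)  # spectral gain
    out = np.zeros(len(x))
    for i, f in enumerate(np.fft.irfft(gain * spec, n=frame, axis=1)):
        out[i*hop:i*hop+frame] += f * win                         # overlap-add
    return out
```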
F5C: A variant of Faugère’s F5 algorithm with reduced Gröbner bases
Eder, Christian; Perry, John
2010-01-01
Faugère's F5 algorithm computes a Gröbner basis incrementally, by computing a sequence of (non-reduced) Gröbner bases. The authors describe a variant of F5, called F5C, that replaces each intermediate Gröbner basis with its reduced Gröbner basis. As a result, F5C considers fewer polynomials and performs substantially fewer polynomial reductions, so that it terminates more quickly. We also provide a generalization of Faugère's characterization theorem for Gröbner bases.
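The object F5C maintains can be illustrated without implementing F5C itself: sympy's groebner computes the reduced Gröbner basis of an ideal, the canonical form that each intermediate basis is replaced with.

```python
# Reduced Groebner basis of a small ideal, for illustration only (this is
# sympy's Buchberger-based routine, not the F5/F5C algorithm).
from sympy import groebner, symbols

x, y, z = symbols('x y z')
G = groebner([x**2 + y - z, x*y - 1], x, y, z, order='lex')
print(G.exprs)   # reduced basis generators for this monomial order
```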
Directory of Open Access Journals (Sweden)
Arif Imam Suroso
2011-08-01
Full Text Available This study aimed to determine the effect of work stress on employee performance, and to identify the indicators of each element shaping the stress that influences the performance of employees in the plant department of the agribusiness industry at PT. NIC. The method of this study is a case study involving 155 respondents. Using Structural Equation Modelling (SEM), it is found that the influence of work stress on employee performance is significantly negative, meaning that increased work stress lowers employee performance. The increase of work stress was stimulated by stressors, in this case job pressure and lack of support. The relationship between the stressors and work stress is significantly positive. Lack of support is the most influential indicator of the stressor variable, rather than job pressure. This study concludes that work stress significantly influences employee performance. Generally, work stress in the plant department of PT. NIC is in the low category (41.9%) and performance is in the high/good category (60.6%). This means that the present level of work stress has positive characteristics, because it plays the role of a motivator to work.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of it, the SAT problem is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.
2016-08-01
SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.
Analysis and improvement of a chaos-based image encryption algorithm
International Nuclear Information System (INIS)
Xiao Di; Liao Xiaofeng; Wei Pengcheng
2009-01-01
The security of digital images has attracted much attention recently. In Guan et al. [Guan Z, Huang F, Guan W. Chaos-based image encryption algorithm. Phys Lett A 2005; 346: 153-7.], a chaos-based image encryption algorithm was proposed. In this paper, the cause of potential flaws in the original algorithm is analyzed in detail, and corresponding enhancement measures are proposed. Both theoretical analysis and computer simulation indicate that the improved algorithm overcomes these flaws while maintaining all the merits of the original one.
An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization
Directory of Open Access Journals (Sweden)
Rafał Dreżewski
2017-08-01
Full Text Available Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose an agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally with a genetic algorithm, a co-evolutionary algorithm and a more classical approach, the trend-following algorithm. In the experiments, historical data from the Warsaw Stock Exchange are used to assess the performance of the compared algorithms. Finally, we draw conclusions from these experiments, showing the strong and weak points of each technique.
Infrastructure system restoration planning using evolutionary algorithms
Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.
2016-01-01
This paper presents an evolutionary algorithm to address restoration issues for supply-chain-interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis, Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. Compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.
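The core loop of such an approach can be illustrated with a toy permutation evolver. The per-day indirect-cost model and all numbers below are invented stand-ins for illustration, not the paper's Greater Saint Louis model.

```python
import random

# Hypothetical per-day indirect cost incurred while each bridge is still
# closed, and repair durations in days (illustrative numbers only).
COST_PER_DAY = [9.0, 4.0, 7.0, 2.0, 5.0]
REPAIR_DAYS  = [10, 5, 8, 3, 6]

def indirect_cost(order):
    """Total indirect cost if bridges are repaired one at a time in `order`."""
    t, total = 0, 0.0
    for b in order:
        t += REPAIR_DAYS[b]
        total += COST_PER_DAY[b] * t   # bridge b stays closed until time t
    return total

def evolve(pop_size=30, gens=300):
    n = len(REPAIR_DAYS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=indirect_cost)            # rank by fitness
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(n), 2)  # swap mutation keeps a permutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=indirect_cost)

best = evolve()
print(best, indirect_cost(best))
```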
Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm
Directory of Open Access Journals (Sweden)
Guangbin Wang
2015-01-01
Full Text Available In view of the uneven distribution of real-world fault samples and the sensitivity of the locally linear embedding (LLE) algorithm's dimension-reduction quality to the choice of neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using homogenization distance instead of the traditional Euclidean distance, which helps to choose effective neighboring points for constructing the weight matrix used in dimension reduction. Because the fault-recognition improvement of HLLE alone is limited and unstable, the paper further proposes a local linear embedding algorithm with supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of homogenization distance, supervised learning adds the category information of the sample points, so that sample points of the same category are gathered and sample points of different categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. The methods were compared in a simulation experiment on rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.
Reductive Catalytic Fractionation of Corn Stover Lignin
Energy Technology Data Exchange (ETDEWEB)
Anderson, Eric M.; Katahira, Rui; Reed, Michelle; Resch, Michael G.; Karp, Eric M.; Beckham, Gregg T.; Román-Leshkov, Yuriy
2016-12-05
Reductive catalytic fractionation (RCF) has emerged as an effective biomass pretreatment strategy to depolymerize lignin into tractable fragments in high yields. We investigate the RCF of corn stover, a highly abundant herbaceous feedstock, using carbon-supported Ru and Ni catalysts at 200 and 250 degrees C in methanol, in the presence or absence of an acid cocatalyst (H3PO4 or an acidified carbon support). Three key performance variables were studied: (1) the effectiveness of lignin extraction as measured by the yield of lignin oil, (2) the yield of monomers in the lignin oil, and (3) the carbohydrate retention in the residual solids after RCF. The monomers included methyl coumarate/ferulate, propyl guaiacol/syringol, and ethyl guaiacol/syringol. The Ru and Ni catalysts performed similarly in terms of product distribution and monomer yields. The monomer yields increased monotonically as a function of time at both temperatures. At 6 h, monomer yields of 27.2 and 28.3% were obtained at 250 and 200 degrees C, respectively, with Ni/C. The addition of an acid cocatalyst to the Ni/C system at 200 degrees C increased monomer yields to 32% for acidified carbon and 38% for phosphoric acid. The monomer product distribution was dominated by methyl coumarate regardless of the use of an acid cocatalyst. The use of phosphoric acid at 200 degrees C, or the high-temperature condition without acid, resulted in complete lignin extraction and partial sugar solubilization (up to 50%), thereby generating lignin oil yields that exceeded the theoretical limit. In contrast, using either Ni/C or Ni on acidified carbon at 200 degrees C resulted in moderate lignin oil yields of ca. 55%, with sugar retention values >90%. Notably, these sugars were amenable to enzymatic digestion, reaching conversions >90% at 96 h. Characterization studies on the lignin oils using two-dimensional heteronuclear single-quantum coherence nuclear magnetic resonance and gel permeation chromatography revealed
Visual Perception Based Rate Control Algorithm for HEVC
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, the bit allocation at the LCU level is not ideal and the subjective quality is unsatisfactory. In this paper, a visual-perception-based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is aimed at improving subjective video quality across various video applications.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region-of-interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts, while slightly decreasing gray levels. The OS-EM algorithm and the small ROI reduced the processing duration without apparent detriment; sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance.
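The two updates named above are textbook algorithms; a compact numpy sketch on a toy linear forward model (random matrices stand in for real CT projection data, which this is not) shows how ML-EM and its ordered-subset variant differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A x: 40 projection bins, 25 image pixels.
A = rng.uniform(0.0, 1.0, size=(40, 25))
x_true = rng.uniform(0.5, 2.0, size=25)
y = A @ x_true

def ml_em(y, A, iters=50):
    """Classic ML-EM multiplicative update for emission-type models."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted
        x *= (A.T @ ratio) / sens
    return x

def os_em(y, A, iters=10, n_subsets=4):
    """OS-EM: the same update applied subset-by-subset, converging faster."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(iters):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

print(np.abs(ml_em(y, A) - x_true).max(), np.abs(os_em(y, A) - x_true).max())
```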
Breen, H J; Rogers, P; Johnson, N W; Slaney, R
1999-08-01
Clinical periodontal measurement is plagued by many sources of error which result in aberrant values (outliers). This study set out to compare probeable crevice depth (PCD) measurements selected by the option-4 algorithm against those recorded with a conventional double-pass method, and to quantify any reduction in site-specific PCD variances. A single clinician recorded full-mouth PCD at 1 visit in 32 subjects (mean age 45.5 years) with moderately advanced chronic adult periodontitis. PCD was recorded over 2 passes at 6 sites per tooth with the Florida Pocket Depth Probe, a 3rd-generation probe. The option-4 algorithm compared the 1st-pass site-specific PCD value (PCD1) to the 2nd-pass value (PCD2) and, if the difference between these values was >1.00 mm, allowed the recording of a maximum of 2 further measurements (3rd- and 4th-pass measurements PCD3 and PCD4); 4 site-specific measurements were considered to be the maximum subject and tissue tolerance. The algorithm selected the first 2 measurements whose difference was within tolerance. Expressed as a percentage difference Y (Y = [(A - B)/A] x 100), the option-4 algorithm achieved a 75% reduction in the median site-specific variance of PCD1/PCD2.
Directory of Open Access Journals (Sweden)
Taline Bavaresco
2012-12-01
Full Text Available OBJECTIVE: to validate the Nursing Interventions Classification (NIC) for the diagnosis 'Risk of Impaired Skin Integrity' in patients at risk of pressure ulcers (PU). METHOD: the sample comprised 16 expert nurses. The data were collected with an instrument listing the interventions and their definitions, which the experts scored on a Likert scale. The data were analyzed statistically using weighted averages (WA). The study was approved by the Research Ethics Committee (56/2010). RESULTS: nine interventions were validated as 'priority' (WA ≥ 0.80), among them Pressure Ulcer Prevention (WA = 0.92), and 22 as 'suggested' (WA > 0.50).
Energy Technology Data Exchange (ETDEWEB)
Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu
2016-11-15
Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
International Nuclear Information System (INIS)
Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang
2010-01-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and the coupled Logistic map, is proposed, and a fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and the bit error rate of the cognitive radio system, and converges faster.
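The coupled Logistic map at the core of the method can be sketched as follows. The diffusive coupling form and all parameter values here are assumptions for illustration, since the abstract does not specify them.

```python
import numpy as np

def coupled_logistic(n, x0=0.3, y0=0.7, r=3.99, eps=0.1):
    """Two logistic maps with symmetric diffusive coupling (assumed form):
    x' = (1-eps) f(x) + eps f(y),  y' = (1-eps) f(y) + eps f(x)."""
    f = lambda u: r * u * (1.0 - u)
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        xs[i], ys[i] = x, y
    return xs, ys

# The chaotic sequences could seed, e.g., initial populations or mutation
# probabilities in a genetic algorithm for channel/power allocation.
xs, ys = coupled_logistic(1000)
print(xs[:5])
```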
Evolving a polymerase for hydrophobic base analogues.
Loakes, David; Gallego, José; Pinheiro, Vitor B; Kool, Eric T; Holliger, Philipp
2009-10-21
Hydrophobic base analogues (HBAs) have shown great promise for the expansion of the chemical and coding potential of nucleic acids but are generally poor polymerase substrates. While extensive synthetic efforts have yielded examples of HBAs with favorable substrate properties, their discovery has remained challenging. Here we describe a complementary strategy for improving HBA substrate properties by directed evolution of a dedicated polymerase using compartmentalized self-replication (CSR), with the archetypal HBA 5-nitroindole (d5NI) and its derivative 5-nitroindole-3-carboxamide (d5NIC) as selection substrates. Starting from a repertoire of chimeric polymerases generated by molecular breeding of DNA polymerase genes from the genus Thermus, we isolated a polymerase (5D4) with a generically enhanced ability to utilize HBAs. The selected polymerase 5D4 was able to form and extend d5NI and d5NIC (d5NI(C)) self-pairs as well as d5NI(C) heteropairs with all four bases, with efficiencies approaching, or exceeding, those of the cognate Watson-Crick pairs, despite significant distortions caused by the intercalation of the d5NI(C) heterocycles into the opposing strand base stack, as shown by nuclear magnetic resonance (NMR) spectroscopy. Unlike Taq polymerase, 5D4 was also able to extend HBA pairs such as pyrene:φ (abasic site), d5NI:φ, and isocarbostyril (ICS):7-azaindole (7AI); it allowed bypass of a chemically diverse spectrum of HBAs and enabled PCR amplification with primers comprising multiple d5NI(C) substitutions, while maintaining high levels of catalytic activity and fidelity. The selected polymerase 5D4 promises to expand the range of nucleobase analogues amenable to replication and should find numerous applications, including the synthesis and replication of nucleic acid polymers with expanded chemical and functional diversity.
Local Community Detection Algorithm Based on Minimal Cluster
Directory of Open Access Journals (Sweden)
Yong Zhou
2016-01-01
Full Text Available In order to discover the structure of local communities more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node, but the agglomeration ability of a single node is necessarily less than that of multiple nodes. The community extension in this algorithm therefore no longer starts from the initial node only, but from a node cluster that contains the initial node and whose nodes are relatively densely connected with each other. The algorithm has two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local communities detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.
Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation
Directory of Open Access Journals (Sweden)
Suk-Ju Kang
2016-12-01
Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and thereby cannot be used for a multi-user system. Even though they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces for the red, green, and blue (RGB and depth images. Then, it calculates features based on the histogram of the oriented gradient for the detected facial region to identify multiple users, and selects the template that best matches the users from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490, compared with benchmark algorithms.
KM-FCM: A fuzzy clustering optimization algorithm based on Mahalanobis distance
Directory of Open Access Journals (Sweden)
Zhiwen ZU
2018-04-01
Full Text Available The traditional fuzzy clustering algorithm uses Euclidean distance as the similarity criterion, which is disadvantageous for multidimensional data processing. To address this, Mahalanobis distance is used instead of the traditional Euclidean distance, and the optimization of fuzzy clustering based on Mahalanobis distance is studied to enhance the clustering effect and capability. With initialization performed by a heuristic search combined with the k-means algorithm, and with a validity function that automatically adjusts the optimal number of clusters, an optimization algorithm, KM-FCM, is proposed. The new algorithm is compared with the FCM, FCM-M and M-FCM algorithms on three standard data sets. The experimental results show that KM-FCM is effective: it has higher clustering accuracy than FCM, FCM-M and M-FCM, and handles high-dimensional data clustering well; it has a global optimization effect, and the number of clusters need not be set in advance. The new algorithm provides a reference for the optimization of fuzzy clustering algorithms based on Mahalanobis distance.
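One ingredient, the standard FCM membership update computed with Mahalanobis rather than Euclidean distances, can be sketched as follows. This is not the full KM-FCM procedure (the heuristic/k-means initialization and the validity function are only hinted at), and using a single shared inverse covariance is an assumption made for brevity.

```python
import numpy as np

def mahalanobis_sq(X, center, cov_inv):
    """Squared Mahalanobis distance of each row of X to `center`."""
    d = X - center
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

def fcm_memberships(X, centers, cov_inv, m=2.0):
    """Standard FCM membership update with Mahalanobis distances."""
    d2 = np.stack([mahalanobis_sq(X, c, cov_inv) for c in centers])  # (k, n)
    d2 = np.maximum(d2, 1e-12)
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=0)          # u[k, n]; columns sum to 1

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
centers = X[rng.choice(200, size=2, replace=False)]   # k-means-style seeding
cov_inv = np.linalg.inv(np.cov(X.T))
U = fcm_memberships(X, centers, cov_inv)
print(U.shape, U.sum(axis=0)[:5])
```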
Solving conic optimization problems via self-dual embedding and facial reduction: A unified approach
DEFF Research Database (Denmark)
Permenter, Frank; Friberg, Henrik A.; Andersen, Erling D.
2017-01-01
it fails to return a primal-dual optimal solution or a certificate of infeasibility. Using this observation, we give an algorithm based on facial reduction for solving the primal problem that, in principle, always succeeds. (An analogous algorithm is easily stated for the dual problem.) This algorithm has...... the appealing property that it only performs facial reduction when it is required, not when it is possible; e.g., if a primal-dual optimal solution exists, it will be found in lieu of a facial reduction certificate even if Slater's condition fails. For the case of linear, second-order, and semidefinite...
Directory of Open Access Journals (Sweden)
Valeria Di Biase
2018-05-01
Full Text Available The paper presents the results obtained in the development of a system for the detection and monitoring of forest fires and the continuous comparison of their intensity when several events occur simultaneously, a common occurrence in European Mediterranean countries during the summer season. The system, called SFIDE (Satellite FIre DEtection), exploits a geostationary satellite sensor, SEVIRI (Spinning Enhanced Visible and InfraRed Imager), on board the MSG (Meteosat Second Generation) satellite series. The algorithm was developed several years ago in the framework of a project (SIGRI) funded by the Italian Space Agency (ASI). It has since been completely revised to enhance its efficiency by reducing the false alarm rate while preserving high sensitivity. Given the very low spatial resolution of SEVIRI images (4 × 4 km² at Mediterranean latitudes), the sensitivity of the algorithm must be very high to detect even small fires. The improvement of the algorithm has been obtained by introducing the sun elevation angle into the computation of the preliminary thresholds used to identify potential thermal anomalies (hot spots), and by introducing a contextual analysis in the detection of clouds and of night-time fires. The results of the algorithm have been validated in the Sardinia region using ground-truth data provided by the regional Corpo Forestale e di Vigilanza Ambientale (CFVA). A significant reduction of the commission error (less than 10%) has been obtained with respect to the previous version of the algorithm, and also with respect to fire-detection algorithms based on low-Earth-orbit satellites.
A novel image-domain-based cone-beam computed tomography enhancement algorithm
Energy Technology Data Exchange (ETDEWEB)
Li Xiang; Li Tianfang; Yang Yong; Heron, Dwight E; Huq, M Saiful, E-mail: lix@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA 15232 (United States)
2011-05-07
Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution: an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit for beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT; the key technique is a sparse representation of the iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Directory of Open Access Journals (Sweden)
Mohammad Mizanur Rahman
2017-09-01
Full Text Available Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification of irrelevant odor data. To reduce false classification and misclassification errors and to improve correct rejection performance, algorithms with a hyperspheric boundary should be used, such as a radial basis function neural network (RBFNN) or a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer. The simulation results presented in this paper show that the GRNN has higher correct classification efficiency and false alarm reduction capability than the RBFNN. As the design of a GRNN or RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple, fast, and efficient in correctly classifying data from the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
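The MMM idea lends itself to a very small sketch: store each training class's per-feature minimum, maximum, and mean; accept a sample only if it falls inside a (tolerance-expanded) min-max box; and break ties by distance to the class mean. The tolerance parameter and the exact tie-breaking rule are assumptions, not necessarily the paper's decision rule.

```python
import numpy as np

class MMMClassifier:
    """Per-class min/max box check with mean-distance tie-breaking."""

    def fit(self, X, y, tol=0.05):
        self.tol = tol
        self.stats = {}
        for label in np.unique(y):
            Xc = X[y == label]
            self.stats[label] = (Xc.min(0), Xc.max(0), Xc.mean(0))
        return self

    def predict_one(self, x):
        candidates = []
        for label, (lo, hi, mean) in self.stats.items():
            slack = (hi - lo) * self.tol        # assumed tolerance expansion
            if np.all(x >= lo - slack) and np.all(x <= hi + slack):
                candidates.append((np.linalg.norm(x - mean), label))
        if not candidates:
            return None                          # reject as an extraneous odor
        return min(candidates)[1]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
clf = MMMClassifier().fit(X, y)
print(clf.predict_one(X[0]), clf.predict_one(np.full(4, 100.0)))
```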
Seismic active control by a heuristic-based algorithm
International Nuclear Information System (INIS)
Tang, Yu.
1996-01-01
A heuristic-based algorithm for seismic active control is generalized to permit consideration of the effects of control-structure interaction and actuator dynamics. The control force is computed one time step ahead of being applied to the structure; therefore, the proposed control algorithm is free from the time-delay problem. A numerical example is presented to show the effectiveness of the proposed control algorithm. Two indices are also introduced to assess the effectiveness and efficiency of control laws.
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
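The spectral reordering itself is short to sketch. The demo below uses a dense eigensolver and a random sparse test matrix for simplicity; a production code for large matrices would use a sparse Lanczos eigensolver instead, and the envelope measure here is a simple illustrative definition.

```python
import numpy as np
import scipy.sparse as sp

def spectral_ordering(A):
    """Reorder a sparse symmetric matrix by sorting the Fiedler vector of the
    graph Laplacian of its sparsity pattern (a minimal sketch of the idea)."""
    pattern = (sp.csr_matrix(abs(A)) > 0).astype(float)
    pattern.setdiag(0)
    pattern.eliminate_zeros()
    deg = np.asarray(pattern.sum(axis=1)).ravel()
    L = sp.diags(deg) - pattern                 # graph Laplacian
    vals, vecs = np.linalg.eigh(L.toarray())    # dense solve: fine for a demo
    fiedler = vecs[:, 1]                        # second-smallest eigenvector
    return np.argsort(fiedler)

def envelope_size(A, perm):
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    B = sp.csr_matrix(abs(A))[perm][:, perm].tocoo()
    width = np.zeros(A.shape[0], dtype=int)
    for i, j in zip(B.row, B.col):
        if j < i:
            width[i] = max(width[i], i - j)
    return int(width.sum())

A = sp.random(120, 120, density=0.03, random_state=0)
A = A + A.T + sp.identity(120)
perm = spectral_ordering(A)
print(envelope_size(A, np.arange(120)), envelope_size(A, perm))
```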
A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map
Directory of Open Access Journals (Sweden)
Xizhong Wang
2013-01-01
Full Text Available We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI), and is suitable not only for multicore processors but also for single-processor architectures. The experimental results show that the chaos-based cryptosystem possesses good statistical properties, and that the parallel algorithm provides much better performance than the serial one. It would be useful for encrypting/decrypting large files or multimedia.
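A serial sketch of the PWLCM keystream core follows; the MPI master/slave partitioning described above is omitted, and the parameter values and byte-extraction rule are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def pwlcm(x, p):
    """Piecewise linear chaotic map on [0, 1) with control parameter p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)         # symmetric branch for x >= 0.5

def keystream(n, x0=0.123456, p=0.35, burn_in=200):
    x = x0
    for _ in range(burn_in):         # discard transient iterations
        x = pwlcm(x, p)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = pwlcm(x, p)
        out[i] = int(x * 256) & 0xFF # map chaotic state to a key byte
    return out

data = np.frombuffer(b"attack at dawn", dtype=np.uint8)
cipher = data ^ keystream(len(data))      # XOR encryption
plain = cipher ^ keystream(len(cipher))   # the same keystream decrypts
print(plain.tobytes())
```

In a parallel version, the master would hand each worker a block index and a per-block initial state so blocks of the keystream can be generated independently.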
Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Christian Appold
2010-06-01
Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids constructing the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.
Positioning Reduction of Deep Space Probes Based on VLBI Tracking
Qiao, S. B.
2011-11-01
Against the background of the Chinese Lunar Exploration Project and the Yinghuo Project, and through theoretical analysis, algorithm study, software development, data simulation and real data processing, the positioning reductions of the European lunar satellite Smart-1 and the Mars Express (MEX) satellite, as well as of the Chinese Chang'e-1 (CE-1) and Chang'e-2 (CE-2) satellites, are accomplished in this dissertation using VLBI and USB tracking data. Progress is made in various aspects, including the development of the theoretical model, the construction of the observation equation, the analysis of the condition of the normal equation, the selection and determination of constraints, the analysis of data simulation, the detection of outliers in observations, the maintenance of the stability of the parameter solution, the development of a practical software system, and the processing of real tracking data. The details of the research progress are as follows: (1) The algorithm for the positioning reduction of deep-space probes based on VLBI tracking data is analyzed. Through data simulation, the effects on the positioning reduction of bias in the predicted orbit, of white noise and systematic errors in the VLBI delays, and of errors in the USB ranges are analyzed. Results show that it is preferable to suppress the dispersion of the positioning data points by applying a constraint on the geocentric distance of the spacecraft when only VLBI tracking data are available. With observations from three VLBI stations, the positioning solution is a biased estimate; in the case of four tracking stations, the uncertainty of the constraint should be in accordance with the bias in the predicted orbit. White noise in the delays and ranges mainly disperses the sequence of positioning data points. Systematic errors in the observations cause a systematic offset of the positioning results, and there are trend jumps in the shape of
Energy Technology Data Exchange (ETDEWEB)
Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J [Lehigh Valley Health Network, Allentown, PA (United States)
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. CT images of patients with internal metal objects, reconstructed using the standard and MAR algorithms, were also compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present, compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation
International Nuclear Information System (INIS)
Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J
2014-01-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. CT images of patients with internal metal objects, reconstructed using the standard and MAR algorithms, were also compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present, compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation
Radioactivity nuclide identification based on BP and LM algorithm neural network
International Nuclear Information System (INIS)
Wang Jihong; Sun Jian; Wang Lianghou
2012-01-01
This paper presents a method for identifying radioactive nuclides based on a BP neural network trained with the Levenberg-Marquardt (LM) algorithm, and compares it with the FR algorithm. Matlab simulation results show that the BP/LM neural-network method is superior to the FR algorithm; with its better performance and higher accuracy, it is the preferred choice. (authors)
Disruption of Alfvénic turbulence by magnetic reconnection in a collisionless plasma
Mallet, Alfred; Schekochihin, Alexander A.; Chandran, Benjamin D. G.
2017-12-01
We calculate the disruption scale λ_D at which sheet-like structures in dynamically aligned Alfvénic turbulence are destroyed by the onset of magnetic reconnection in a low-β collisionless plasma. The scaling of λ_D depends on the order of the statistics being considered, with more intense structures being disrupted at larger scales. The disruption scale for the structures that dominate the energy spectrum is λ_D ~ L_⊥^{1/9} (d_e ρ_s)^{4/9}, where d_e is the electron inertial scale, ρ_s is the ion sound scale and L_⊥ is the outer scale of the turbulence. When β_e and ρ_s/L_⊥ are sufficiently small, the scale λ_D is larger than ρ_s and there is a break in the energy spectrum at λ_D, rather than at ρ_s. We propose that the fluctuations produced by the disruption are circularised flux ropes, which may have already been observed in the solar wind. We predict the relationship between the amplitude and radius of these structures and quantify the importance of the disruption process to the cascade in terms of the filling fraction of undisrupted structures and the fractional reduction of the energy contained in them at the ion sound scale ρ_s. Both of these fractions depend strongly on β_e, with the disrupted structures becoming more important at lower β_e. Finally, we predict that the energy spectrum between λ_D and ρ_s is steeper than k_⊥^{-3}, when this range exists. Such a steep 'transition range' is sometimes observed in short intervals of solar-wind turbulence. The onset of collisionless magnetic reconnection may therefore significantly affect the nature of plasma turbulence around the ion gyroscale.
A new edge detection algorithm based on Canny idea
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a Euclidean-distance-based filter are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain threshold values, and the average of all computed thresholds is taken: half of that average serves as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
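The threshold-selection idea transfers directly to a few lines of OpenCV. Note that this sketch is only an approximation of the method: it applies a single global Otsu pass rather than the paper's block-wise Otsu averaging, OpenCV's Canny keeps its internal Sobel gradient rather than the Frei-Chen operator, and the file name is assumed.

```python
import cv2

# Assumes "input.png" exists; cv2.imread returns None otherwise.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)                  # suppress impulsive noise

# Otsu's method picks a global threshold; reuse it for hysteresis levels.
otsu_thresh, _ = cv2.threshold(img, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
high = otsu_thresh
low = 0.5 * high                              # low threshold = half of high
edges = cv2.Canny(img, low, high)
cv2.imwrite("edges.png", edges)
```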
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2011-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (Rrs, sr^-1) in the green and a reference formed linearly between Rrs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful for improving the detection of various ocean features such as eddies. Preliminary tests on MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
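The color index is simple enough to state in code. The SeaWiFS-style band centres and the CI-to-Chl coefficients below are taken from the published CI algorithm as the author recalls it and should be treated as assumptions to verify against the paper before use.

```python
import numpy as np

def color_index(rrs_blue, rrs_green, rrs_red,
                lam=(443.0, 555.0, 670.0)):
    """CI: green Rrs minus a linear baseline between the blue and red Rrs.
    Band centres (nm) are an assumed SeaWiFS-style choice."""
    lb, lg, lr = lam
    baseline = rrs_blue + (lg - lb) / (lr - lb) * (rrs_red - rrs_blue)
    return rrs_green - baseline

def chl_from_ci(ci, a0=-0.4909, a1=191.6590):
    """Chl (mg m^-3) = 10**(a0 + a1*CI), intended for low-Chl waters
    (roughly Chl <= 0.25 mg m^-3); coefficients assumed from the paper."""
    return 10.0 ** (a0 + a1 * np.asarray(ci))

ci = color_index(rrs_blue=0.0095, rrs_green=0.0040, rrs_red=0.0005)
print(chl_from_ci(ci))   # ~0.2 mg m^-3 for these illustrative Rrs values
```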
A Coupled User Clustering Algorithm Based on Mixed Data for Web-Based Learning Systems
Directory of Open Access Journals (Sweden)
Ke Niu
2015-01-01
Full Text Available In traditional Web-based learning systems, user clustering algorithms have been introduced to compensate for insufficient analysis of learning behaviors and a lack of personalized study guidance. When analyzing behaviors with these algorithms, researchers generally focus on continuous data but easily neglect the discrete data that online learning actions also generate. Moreover, there are implicit coupled interactions among the data which these algorithms frequently ignore. As a result, a mass of significant information which could improve clustering accuracy is neglected. To solve these issues, we propose a coupled user clustering algorithm for Web-based learning systems that takes into account both discrete and continuous data, as well as the intracoupled and intercoupled interactions of the data. The experimental results in this paper demonstrate that the proposed algorithm outperforms the alternatives.
Network-based recommendation algorithms: A review
Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš
2016-06-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and, for the first time, compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of the systems that use it, and finally discuss open research directions and challenges.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The most well-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, remained inactive for many years. There were two major reasons for this inactive period. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
Generalized Time-Limited Balanced Reduction Method
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Shaker, Fatemeh
2013-01-01
In this paper, a new method for model reduction of bilinear systems is presented. The proposed technique is from the family of gramian-based model reduction methods. The method uses time-interval generalized gramians in the reduction procedure rather than the ordinary generalized gramians...... and in such a way it improves the accuracy of the approximation within the time-interval which the method is applied. The time-interval generalized gramians are the solutions to the generalized time-interval Lyapunov equations. The conditions for these equations to be solvable are derived and an algorithm...
Huertas Guzmán, Vilma Adriana; Ávila Vargas, Yuly Esperanza
2015-01-01
The main objective of this research is to design a financial optimization model for the accounting recognition of deferred tax in Colombia under International Accounting Standard 12 (NIC 12) and its impact on the financial information to be disclosed. It begins with a bibliographic review of the different theories developed around the central topic; a case study is proposed within the methodology; and finally a practical proposal is offered for the nec...
Teaching learning based optimization algorithm and its engineering applications
Rao, R Venkata
2016-01-01
Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm
Directory of Open Access Journals (Sweden)
Zhang Fang Hu
2014-04-01
Full Text Available As the pixel information of a depth image is derived from distance information, mismatched pairs can occur in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between points, it not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results show that the improved SURF algorithm effectively improves the recognition rate and has good robustness.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Directory of Open Access Journals (Sweden)
Zhiming Song
2015-01-01
Full Text Available As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is, under some mild conditions, a piecewise continuous (m-1)-dimensional manifold in the decision space. How to exploit this regularity when designing multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution whose centroid is an (m-1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on test instances with variable linkages, while also being more efficient than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in the paper.
A cross-disciplinary introduction to quantum annealing-based algorithms
Venegas-Andraca, Salvador E.; Cruz-Santos, William; McGeoch, Catherine; Lanzagorta, Marco
2018-04-01
A central goal in quantum computing is the development of quantum hardware and quantum algorithms in order to analyse challenging scientific and engineering problems. Research in quantum computation involves contributions from both physics and computer science; hence this article presents a concise introduction to basic concepts from both fields that are used in annealing-based quantum computation, an alternative to the more familiar quantum gate model. We introduce some concepts from computer science required to define difficult computational problems and to realise the potential relevance of quantum algorithms to find novel solutions to those problems. We introduce the structure of quantum annealing-based algorithms as well as two examples of this kind of algorithms for solving instances of the max-SAT and Minimum Multicut problems. An overview of the quantum annealing systems manufactured by D-Wave Systems is also presented.
New MPPT algorithm based on hybrid dynamical theory
Elmetennani, Shahrazed
2014-11-01
This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions.
New MPPT algorithm based on hybrid dynamical theory
Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.
2014-01-01
This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, an algorithm called delay-multiply-and-sum (DMAS) was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
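The DAS/DMAS contrast is easy to see in a toy sketch on already delay-aligned channel samples; the MV weighting that defines MVB-DMAS is omitted here, and the signed square-root form of the pairwise products is the standard DMAS convention assumed for this illustration.

```python
import numpy as np

def das(channels):
    """Delay-and-sum on delay-aligned channel samples (shape: n_ch,)."""
    return channels.sum()

def dmas(channels):
    """Delay-multiply-and-sum: signed square root of pairwise products,
    summed over all distinct channel pairs (i < j)."""
    s = channels
    y = 0.0
    for i in range(len(s) - 1):
        prod = s[i] * s[i + 1:]
        y += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
    return y

rng = np.random.default_rng(3)
aligned = 1.0 + 0.1 * rng.normal(size=16)   # coherent on-axis samples
noise = rng.normal(size=16)                 # incoherent off-axis samples
print(das(aligned), dmas(aligned))          # both respond strongly
print(das(noise), dmas(noise))              # DMAS suppresses incoherence more
```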
Segment-based dose optimization using a genetic algorithm
International Nuclear Information System (INIS)
Cotrutz, Cristian; Xing Lei
2003-01-01
Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. First, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and directly optimize the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. Convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study confirms previous findings that fewer segments are generally needed to generate plans comparable with those obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
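The three-chromosome encoding and a crossover that respects it can be sketched as follows. The dose model, fitness evaluation, and selection machinery are omitted, and all array sizes and ranges are invented for illustration; only the encoding structure reflects the description above.

```python
import numpy as np

rng = np.random.default_rng(4)
N_SEGMENTS, N_LEAF_PAIRS = 5, 10

def random_plan():
    """Three 'chromosomes': left-bank leaves, right-bank leaves, weights.
    Leaf positions are integer bixel indices with left < right per pair."""
    left = rng.integers(0, 5, size=(N_SEGMENTS, N_LEAF_PAIRS))
    right = left + rng.integers(1, 6, size=(N_SEGMENTS, N_LEAF_PAIRS))
    weights = rng.uniform(0.1, 1.0, size=N_SEGMENTS)
    return left, right, weights

def crossover(plan_a, plan_b):
    """Uniform crossover: leaf pairs are exchanged jointly (the same mask is
    applied to left and right banks, so left < right is preserved), while
    segment weights are exchanged independently."""
    (la, ra, wa), (lb, rb, wb) = plan_a, plan_b
    mask = rng.random(la.shape) < 0.5
    left = np.where(mask, la, lb)
    right = np.where(mask, ra, rb)
    wmask = rng.random(wa.shape) < 0.5
    weights = np.where(wmask, wa, wb)
    return left, right, weights

parent1, parent2 = random_plan(), random_plan()
child = crossover(parent1, parent2)
print(child[2])   # weight chromosome of the offspring
```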
Directory of Open Access Journals (Sweden)
EMIROGLU, S.
2017-11-01
Full Text Available This paper proposes a distributed reactive power control based approach to deploy a Volt/VAr optimization (VVO)/Conservation Voltage Reduction (CVR) algorithm in a distribution network with distributed generation (DG) units and distribution static synchronous compensators (D-STATCOM). A three-phase VVO/CVR problem is formulated and the reactive power references of D-STATCOMs and DGs are determined in a distributed way by decomposing the VVO/CVR problem into voltage and reactive power control. The main purpose is to determine the coordination between the voltage regulator (VR) and reactive power sources (capacitors, D-STATCOMs and DGs) based on VVO/CVR. The study shows that the reactive power injection capability of DG units may play an important role in VVO/CVR. In addition, it is shown that the coordination of the VR and reactive power sources not only saves more energy and power but also reduces the power losses. Moreover, the proposed VVO/CVR algorithm reduces the computational burden and finds fast solutions. To illustrate the effectiveness of the proposed method, the VVO/CVR is performed on the IEEE 13-node test feeder considering unbalanced loading and line configurations. The tests take practical voltage-dependent load modeling and different customer types into consideration to improve accuracy.
Directory of Open Access Journals (Sweden)
Vivek Patel
2012-08-01
Full Text Available Nature-inspired population-based algorithms form a research field that simulates different natural phenomena to solve a wide range of problems. Researchers have proposed several algorithms based on different natural phenomena. Teaching-Learning-based optimization (TLBO) is a recently proposed population-based algorithm that simulates the teaching-learning process of the classroom. This algorithm requires no algorithm-specific control parameters. In this paper, the elitism concept is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated. The effects of common control parameters, such as the population size and the number of generations, on the performance of the algorithm are also investigated. The proposed algorithm is tested on 35 constrained benchmark functions with different characteristics, and its performance is compared with that of other well-known optimization algorithms. The proposed algorithm can be applied to various optimization problems of the industrial environment.
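A minimal sketch of an elitist TLBO iteration is given below, assuming a box-constrained minimization problem; the population size, teaching factor and elite count are illustrative, and the implementation follows the standard teacher/learner phases with the remembered elites re-injected over the current worst solutions.

```python
import numpy as np

def elitist_tlbo(f, lo, hi, pop=20, gens=100, elites=2, seed=0):
    """Minimize f on the box [lo, hi]^d with an elitist TLBO sketch."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = rng.uniform(lo, hi, (pop, d))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(gens):
        elite_X = X[np.argsort(F)[:elites]].copy()       # remember the best
        teacher = X[F.argmin()]
        Tf = rng.integers(1, 3)                          # teaching factor 1 or 2
        # teacher phase: move everyone toward the teacher, away from the mean
        Xn = np.clip(X + rng.random((pop, d)) * (teacher - Tf * X.mean(0)), lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # learner phase: each learner interacts with a random partner
        partners = rng.permutation(pop)
        step = np.where((F < F[partners])[:, None], X - X[partners], X[partners] - X)
        Xn = np.clip(X + rng.random((pop, d)) * step, lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # elitism: overwrite the current worst with the remembered elites
        worst = np.argsort(F)[-elites:]
        X[worst] = elite_X
        F[worst] = np.apply_along_axis(f, 1, elite_X)
    return X[F.argmin()], F.min()

best_x, best_f = elitist_tlbo(lambda x: np.sum(x ** 2),
                              np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best_x, best_f)
```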
Electron dose map inversion based on several algorithms
International Nuclear Information System (INIS)
Li Gui; Zheng Huaqing; Wu Yican; Fds Team
2010-01-01
The reconstruction of the electron dose map in radiation therapy was investigated by constructing an inversion model of the electron dose map with different algorithms. An inversion model based on nonlinear programming was used, and this model was applied to the penetration dose map to invert the total-space dose map. The inversion model was realized with several inversion algorithms. Test results with seven samples show that, except for the NMinimize algorithm, which worked for just one sample and with great error, all the inversion algorithms realized our inversion model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, can be considered the first choice for electron dose map inversion. Further tests show that more error is introduced when data close to the electron range are used (tail error). The tail error might be caused by the approximation of mean energy spectra, and this should be considered to improve the method. The time-saving and accurate algorithms can be used to achieve real-time dose map inversion. By selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)
New calibration algorithms for dielectric-based microwave moisture sensors
New calibration algorithms for determining moisture content in granular and particulate materials from measurement of the dielectric properties at a single microwave frequency are proposed. The algorithms are based on identifying empirically correlations between the dielectric properties and the par...
A Data Forward Stepwise Fitting Algorithm Based on Orthogonal Function System
Directory of Open Access Journals (Sweden)
Li Han-Ju
2017-01-01
Full Text Available Data fitting is the main method of functional data analysis and is widely used in economics, social science, and engineering technology. The least squares method is the main method of data fitting, but it does not converge stepwise, has no memory property, can yield large fitting errors, and is prone to overfitting. Based on the orthogonal trigonometric function system, this paper presents a forward stepwise data fitting algorithm. The algorithm adopts a forward stepwise fitting strategy, each time using the basis function nearest to the residual generated by the previous fitting step, which minimizes the residual mean square error. We theoretically prove the convergence, the memory property, and the diminishing fitting error of the algorithm. Experimental results show that the proposed algorithm is effective and that its fitting performance is better than that of the least squares method and of the forward stepwise fitting algorithm based on a non-orthogonal function system.
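The greedy residual-fitting idea lends itself to a compact sketch. The code below is an illustration under the assumption of a trigonometric basis orthogonal on a uniform grid, not the paper's implementation: at each step it picks the single basis function whose least-squares projection removes the most residual energy and subtracts it.

```python
import numpy as np

def stepwise_trig_fit(x, y, n_steps=10):
    """Greedy forward fit on the orthogonal trigonometric system.

    At every step the basis function that most reduces the residual
    mean-square error is chosen, and its least-squares component is
    subtracted from the residual.
    """
    # candidate basis: constant, cos(kx), sin(kx)
    basis = [np.ones_like(x)]
    for k in range(1, 20):
        basis.append(np.cos(k * x))
        basis.append(np.sin(k * x))
    residual = y.copy()
    model = []
    for _ in range(n_steps):
        # least-squares coefficient of each candidate against the residual;
        # subtracting c*b reduces the sum of squares by c^2 * (b . b)
        coeffs = [residual @ b / (b @ b) for b in basis]
        gains = [c ** 2 * (b @ b) for c, b in zip(coeffs, basis)]
        j = int(np.argmax(gains))         # basis function "nearest" the residual
        residual -= coeffs[j] * basis[j]
        model.append((j, coeffs[j]))
    return model, residual

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
y = (1.5 + np.sin(3 * x) - 0.5 * np.cos(7 * x)
     + 0.05 * np.random.default_rng(0).standard_normal(x.size))
model, res = stepwise_trig_fit(x, y, n_steps=5)
print("residual MSE:", float(res @ res / res.size))
```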
Automated Vectorization of Decision-Based Algorithms
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. This work advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
Directory of Open Access Journals (Sweden)
Yen Ten-Yang
2011-02-01
Full Text Available Abstract Background Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS) based analysis has been applied to this problem following protein digestion under both partial-reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results. Method We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation-algorithm-based search formulation with data-driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results. Results The method was tested on nine different eukaryotic glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space
Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm
Li, Xiao; Scaglione, Anna
2013-11-01
The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, finds a basis with short, nearly orthogonal vectors for an integer lattice. It can thereby also be seen as an approximation algorithm for the shortest vector problem (SVP), which is NP-hard.
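A textbook version of LLL reduction can be written compactly over exact rationals. The sketch below is illustrative rather than efficient (it recomputes the Gram-Schmidt data after every update for clarity) and uses the common parameter delta = 3/4.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction over exact rationals (a sketch, not optimized)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovász condition: either accept b_k or swap and backtrack
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```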
Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu
2017-06-17
Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with a certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm and rebuilds the measurement model to contain acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
Directory of Open Access Journals (Sweden)
Karla Vittori
2008-12-01
Full Text Available We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on the evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Like other optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences and from 898 to 16,608 base pairs, covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.
A chaos-based image encryption algorithm with variable control parameters
International Nuclear Information System (INIS)
Wang Yong; Wong, K.-W.; Liao Xiaofeng; Xiang Tao; Chen Guanrong
2009-01-01
In recent years, a number of image encryption algorithms based on the permutation-diffusion structure have been proposed. However, the control parameters used in the permutation stage are usually fixed in the whole encryption process, which favors attacks. In this paper, a chaos-based image encryption algorithm with variable control parameters is proposed. The control parameters used in the permutation stage and the keystream employed in the diffusion stage are generated from two chaotic maps related to the plain-image. As a result, the algorithm can effectively resist all known attacks against permutation-diffusion architectures. Theoretical analyses and computer simulations both confirm that the new algorithm possesses high security and fast encryption speed for practical image encryption.
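The plain-image-dependent permutation-diffusion idea can be sketched as follows; this is a toy illustration using a logistic map and illustrative seed keys, not the paper's exact construction. The plain-image mean perturbs both chaotic seeds, so each image is encrypted with a different permutation and keystream, which is the property used to resist chosen-plaintext attacks.

```python
import numpy as np

def logistic(x, r=3.99, n=1):
    """Iterate the logistic map n times; stays in (0, 1) for these parameters."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def encrypt(img, key=(0.3711, 0.6529)):
    """Permutation-diffusion sketch with plain-image-dependent parameters."""
    flat = img.astype(np.uint8).ravel()
    perturb = flat.mean() / 255.0 / 256.0      # ties the keys to the plain image
    x1, x2 = (key[0] + perturb) % 1.0, (key[1] + perturb) % 1.0
    # permutation stage: sort a chaotic orbit to obtain a pixel permutation
    orbit = np.empty(flat.size)
    x = logistic(x1, n=100)                    # discard the transient
    for i in range(flat.size):
        x = logistic(x)
        orbit[i] = x
    perm = np.argsort(orbit)
    shuffled = flat[perm]
    # diffusion stage: chaotic keystream XORed with chained ciphertext
    cipher = np.empty_like(shuffled)
    prev = 0
    x = logistic(x2, n=100)
    for i, p in enumerate(shuffled):
        x = logistic(x)
        k = int(x * 256) % 256
        cipher[i] = (int(p) ^ k ^ prev) & 0xFF
        prev = int(cipher[i])
    return cipher.reshape(img.shape), perm

img = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 4) % 256
cipher, perm = encrypt(img)
print(cipher[0])
```

Decryption reverses the chained XOR stage and then applies the inverse permutation; both require the same keys and the recovered plain-image mean.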
Noise filtering algorithm for the MFTF-B computer based control system
International Nuclear Information System (INIS)
Minor, E.G.
1983-01-01
An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions
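The behavior described, four bytes of state per channel and one subtraction plus one comparison per sample, corresponds to a deadband filter. A minimal sketch follows (illustrative class and threshold, not the MFTF-B code): an input is forwarded only when it moves outside a deadband around the last reported value.

```python
class DeadbandFilter:
    """Per-channel noise filter in the spirit of the MFTF-B algorithm.

    Only the last *reported* value is stored per channel (the "four
    bytes" of state); an update is forwarded to the database only when
    the input leaves a deadband around that value, so the work per
    sample is one subtraction and one comparison.
    """

    def __init__(self, n_channels, deadband):
        self.last = [None] * n_channels
        self.deadband = deadband

    def update(self, channel, value):
        """Return the value if it is a significant change, else None (noise)."""
        ref = self.last[channel]
        if ref is None or abs(value - ref) > self.deadband:
            self.last[channel] = value
            return value
        return None

f = DeadbandFilter(n_channels=1, deadband=0.5)
readings = [1.00, 1.10, 0.95, 1.80, 1.75, 2.60]
print([f.update(0, v) for v in readings])
# -> [1.0, None, None, 1.8, None, 2.6]: noise suppressed, real steps reported
```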
Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm
Directory of Open Access Journals (Sweden)
N. Sri Madhava Raja
2014-01-01
Full Text Available A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain optimal threshold levels for grayscale images. The performance of the proposed algorithm is demonstrated on twelve benchmark images and compared with existing FA variants such as Lévy flight (LF) guided FA and random operator guided FA. The performance assessment between the proposed and existing firefly algorithms uses prevailing measures such as the objective function value, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that the BD-guided FA provides a better objective function value, PSNR, and SSIM, whereas the LF-based FA provides faster convergence with relatively lower CPU time.
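Otsu's multilevel objective is easy to state in code. The sketch below is illustrative: an exhaustive scan stands in for the firefly search, which replaces this slow scan with a guided stochastic walk over candidate thresholds. It maximizes the between-class variance for two thresholds of a 256-bin histogram.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's objective for multilevel thresholding of a 256-bin histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), 256]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# brute-force baseline for two thresholds on a synthetic trimodal image
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 3000), rng.normal(130, 12, 3000),
                      rng.normal(200, 8, 3000)]).clip(0, 255).astype(np.uint8)
hist = np.bincount(img, minlength=256)
best = max(((t1, t2) for t1 in range(1, 255) for t2 in range(t1 + 1, 256)),
           key=lambda t: between_class_variance(hist, t))
print("optimal thresholds:", best)
```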
An algorithm to construct Groebner bases for solving integration by parts relations
International Nuclear Information System (INIS)
Smirnov, Alexander V.
2006-01-01
This paper is a detailed description of an algorithm based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. The algorithm is used to calculate Feynman integrals and has proved to be efficient in several complicated cases
An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network
Directory of Open Access Journals (Sweden)
Kai Hu
2013-01-01
Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN). It has better global optimization characteristics than traditional optimal algorithms. In this paper, we use GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as the noise detector. Then, the well-trained GA-BPN is utilized to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.
An accurate projection algorithm for array processor based SPECT systems
International Nuclear Information System (INIS)
King, M.A.; Schwinger, R.B.; Cool, S.L.
1985-01-01
A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
Directory of Open Access Journals (Sweden)
Chung-Ta Li
2014-01-01
Full Text Available We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation (BP) algorithms (SEMBP) for the design of an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS). The interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and a TSK-type consequent part are adopted to implement the network structure of the AIT2FNS. In addition, the type-reduction procedure is integrated into an adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can effectively enhance the approximation accuracy using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which contains the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of the EM and BP algorithms to attain faster convergence and lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm's ability to find the global optimum. Finally, two illustrative examples of nonlinear system control are presented to demonstrate the performance and effectiveness of the proposed AIT2FNS with the SEMBP algorithm.
Optimization algorithm based on densification and dynamic canonical descent
Bousson, K.; Correia, S. D.
2006-07-01
Stochastic methods have gained some popularity in global optimization in that most of them do not assume the cost functions to be differentiable. They have the capability to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method that reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performance of the new method is shown on several known problems classically used for testing optimization algorithms, and it is proved to outperform competitive algorithms such as simulated annealing and genetic algorithms.
δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions
Directory of Open Access Journals (Sweden)
Hengrong Ju
2014-01-01
Full Text Available The decision-theoretic rough set is a quite useful rough set obtained by introducing decision cost into probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation, and such a relation may be too strict for many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decrease, two different algorithms are designed to compute reducts. The comparison between these two algorithms shows the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, so the uncertainty coming from the boundary region is decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends for rough set theory.
Quantum Algorithms for Compositional Natural Language Processing
Directory of Open Access Journals (Sweden)
William Zeng
2016-08-01
Full Text Available We propose a new application of quantum computing to the field of natural language processing. Ongoing work in this field attempts to incorporate grammatical structure into algorithms that compute meaning. In (Coecke, Sadrzadeh and Clark, 2010), the authors introduce such a model (the CSC model) based on tensor product composition. While this algorithm has many advantages, its implementation is hampered by the large classical computational resources that it requires. In this work we show how the computational shortcomings of the CSC approach could be resolved using quantum computation (possibly in addition to existing techniques for dimension reduction). We address the value of quantum RAM (Giovannetti, 2008) for this model and extend an algorithm from Wiebe, Braun and Lloyd (2012) into a quantum algorithm to categorize sentences in CSC. Our new algorithm demonstrates a quadratic speedup over classical methods under certain conditions.
A novel clustering algorithm based on quantum games
International Nuclear Information System (INIS)
Li Qiang; He Yan; Jiang Jingping
2009-01-01
Quantum algorithms have achieved enormous successes during the last decade. In this paper, we combine quantum games with the problem of data clustering and develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. The player then uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of the links connecting to them in order to maximize his payoff. Further, the algorithms are discussed and analyzed for two cases of strategies, two payoff matrices and two LRR functions. The simulation results demonstrate that data points in datasets are clustered reasonably and efficiently and that the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.
A Plagiarism Detection Algorithm based on Extended Winnowing
Directory of Open Access Journals (Sweden)
Duan Xuliang
2017-01-01
Full Text Available Plagiarism is a common problem faced by academia and education. Mature commercial plagiarism detection systems are comprehensive and highly accurate, but their expensive detection costs make them unsuitable for real-time, lightweight application environments such as plagiarism detection for student assignments. This paper introduces a method extending the classic Winnowing plagiarism detection algorithm, expanding its functionality. The extended algorithm retains text location and length information from the original document while extracting the fingerprints of a document, so that locating and marking plagiarized text fragments become much easier. The experimental results and several years of operational practice show that the extension has little effect on performance: a normal PC hardware configuration is able to meet the requirements of small and medium-sized applications. Based on the lightweight, high-efficiency, reliability and flexibility characteristics of Winnowing, the extended algorithm further enhances adaptability and extends the application areas.
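The extension described, keeping position and length alongside each fingerprint, can be sketched on top of standard winnowing. The parameters below are illustrative, and MD5 stands in for whatever rolling hash a production system would use.

```python
import hashlib

def winnow(text, k=5, window=4):
    """Winnowing fingerprints extended with position and length data.

    Each selected fingerprint keeps the character offset and the length
    of its k-gram, so a matching fingerprint in another document can be
    located and highlighted directly, as the extended algorithm requires.
    """
    grams = [(text[i:i + k], i) for i in range(len(text) - k + 1)]
    hashes = [(int(hashlib.md5(g.encode()).hexdigest(), 16) & 0xFFFFFFFF, pos, k)
              for g, pos in grams]
    fingerprints = set()
    for i in range(len(hashes) - window + 1):
        # standard winnowing: pick the minimal hash in each window,
        # breaking ties in favor of the rightmost occurrence
        fingerprints.add(min(hashes[i:i + window], key=lambda h: (h[0], -h[1])))
    return sorted(fingerprints, key=lambda h: h[1])

doc = "the quick brown fox jumps over the lazy dog"
for h, pos, length in winnow(doc):
    print(f"hash={h:10d} at {pos:2d}: {doc[pos:pos + length]!r}")
```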
Directory of Open Access Journals (Sweden)
Maria Gabriela L. d'Ottaviano-Morelli
2004-02-01
Full Text Available This study aimed to estimate and analyze the prevalence of cervical intraepithelial neoplasia (CIN) and invasive cervical carcinoma based on cytological diagnosis. The study included 120,635 women undergoing cytological exams in public health services in the region of Campinas, São Paulo State, Brazil, between September 1998 and March 1999. Prevalence rates per 100,000 women were: 354 for CIN I; 255 for CIN II; 141 for CIN III; and 24 for invasive carcinoma. As age increased, prevalence rates and prevalence ratios decreased for CIN grades I and II and increased for CIN III until the 50-54 age group, decreasing thereafter. The prevalence rate of invasive carcinoma increased with age. The prevalence pattern of CIN II was distinct from that of CIN III, but similar to that of CIN I. This would not have been observed if the Bethesda System had been used for cytological diagnosis. Mean age at time of CIN II diagnosis was about 10 years less than for CIN III diagnosis. Therefore, a high-grade lesion diagnosed in a young woman according to the Bethesda System would probably be a CIN II, whereas in an older woman it would probably be a CIN III.
A similarity based agglomerative clustering algorithm in networks
Liu, Zhiyuan; Wang, Xiujuan; Ma, Yinghong
2018-04-01
The detection of clusters is beneficial for understanding the organization and function of networks. Clusters, or communities, are usually groups of nodes densely interconnected but sparsely linked to other clusters. To identify communities, an efficient and effective agglomerative community detection algorithm based on node similarity is proposed. The proposed method initially calculates the similarities between each pair of nodes and forms pre-partitions according to the principle that each node is in the same community as its most similar neighbor. After that, each partition is checked against a community criterion. Pre-partitions that do not satisfy the criterion are merged with the partitions having the biggest attraction until there are no further changes. To measure the attraction ability of a partition, we propose an attraction index based on the importance of the linked nodes in the network. Therefore, our proposed method can better exploit node properties and network structure. To test the performance of the algorithm, both synthetic and empirical networks of different scales are tested. Simulation results show that the proposed algorithm obtains superior clustering results compared with six other widely used community detection algorithms.
New Parallel Algorithms for Landscape Evolution Model
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
International Nuclear Information System (INIS)
Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Cicero, F. Lo; Lonardo, A.; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Chiozzi, S.; Ramusino, A. Cotta; Fiorini, M.; Gianoli, A.; Neri, I.; Lorenzo, S. Di; Fantechi, R.; Piandani, R.; Pontisso, L.; Lamanna, G.; Piccini, M.
2017-01-01
This project aims to exploit the parallel computing power of a commercial Graphics Processing Unit (GPU) to implement fast pattern matching in the Ring Imaging Cherenkov (RICH) detector for the level 0 (L0) trigger of the NA62 experiment. In this approach, the ring-fitting algorithm is seedless, being fed with raw RICH data, with no previous information on the ring position from other detectors. Moreover, since the L0 trigger is provided with more elaborate information than a simple multiplicity number, it achieves a higher selection power. Two methods have been studied in order to reduce the data transfer latency from the readout boards of the detector to the GPU, i.e., the use of a dedicated NIC device driver with very low latency and a direct data transfer protocol from a custom FPGA-based NIC to the GPU. The performance of the system, developed through the FPGA approach, for multi-ring Cherenkov online reconstruction obtained during the NA62 physics runs is presented.
International Nuclear Information System (INIS)
Rao, R. Venkata; Rai, Dhiraj P.
2017-01-01
Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
Energy Technology Data Exchange (ETDEWEB)
Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)
2017-05-15
Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
Possible Noise Nature of Elsässer Variable z- in Highly Alfvénic Solar Wind Fluctuations
Wang, X.; Tu, C.-Y.; He, J.-S.; Wang, L.-H.; Yao, S.; Zhang, L.
2018-01-01
It has been a long-standing debate on the nature of Elsässer variable z- observed in the solar wind fluctuations. It is widely believed that z- represents inward propagating Alfvén waves and interacts nonlinearly with z+ (outward propagating Alfvén waves) to generate energy cascade. However, z- variations sometimes show a feature of convective structures. Here we present a new data analysis on autocorrelation functions of z- in order to get some definite information on its nature. We find that there is usually a large drop on the z- autocorrelation function when the solar wind fluctuations are highly Alfvénic. The large drop observed by Helios 2 spacecraft near 0.3 AU appears at the first nonzero time lag τ = 81 s, where the value of the autocorrelation coefficient drops to 25%-65% of that at τ = 0 s. Beyond the first nonzero time lag, the autocorrelation coefficient decreases gradually to zero. The drop of z- correlation function also appears in the Wind observations near 1 AU. These features of the z- correlation function may suggest that z- fluctuations consist of two components: high-frequency white noise and low-frequency pseudo structures, which correspond to flat and steep parts of z- power spectrum, respectively. This explanation is confirmed by doing a simple test on an artificial time series, which is obtained from the superposition of a random data series on its smoothed sequence. Our results suggest that in highly Alfvénic fluctuations, z- may not contribute importantly to the interactions with z+ to produce energy cascade.
Directory of Open Access Journals (Sweden)
Carlos VARGAS VASSEROT
2007-01-01
Full Text Available From IAS 32 it follows that members' contributions to capital are to be recognized as equity only if the cooperative has an unconditional right to refuse their redemption. However, Spanish cooperative law, in order to protect members as far as possible, even at the risk of threatening the stability of the cooperative, has historically recognized a quasi-absolute right to the reimbursement of contributions, which the cooperative must honor even if this means reducing the statutory capital or even dissolving the cooperative. Given how the right to reimbursement of members' contributions is recognized in Spanish cooperative legislation, where it may be subjected to certain temporal and even quantitative limitations but its exercise cannot be prevented, all contributions made by members to the share capital of cooperatives must be classified for accounting purposes as liabilities and no longer, as until now, as equity. Therefore, under Spanish cooperative law, if all members' contributions to cooperative capital are not to be considered external resources, a series of legal amendments to the regulation of the right of reimbursement and to the regime of share capital itself is necessary. This study addresses the planned reform of Law 27/1999 on Cooperatives to adapt it to the content of IAS 32 and examines the consequences of classifying members' contributions for accounting purposes as liabilities rather than as net equity, as has been done until now.
2013-01-01
Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202
Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian
2017-11-01
A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly oncogenic. PAH pollutants can be detected by fluorescence spectroscopy, but the instrument introduces noise in the experiment, and weak fluorescent signals can be affected by it, so we propose a way to denoise and improve the detection. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, the noise is reduced by the EEMD algorithm. Finally, the experimental results show the proposed method is feasible.
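A sketch of the EEMD denoising step is given below, assuming the third-party PyEMD package (pip name EMD-signal) and a simple reconstruction rule that discards the highest-frequency intrinsic mode functions; the synthetic spectrum, noise level, and number of discarded IMFs are all illustrative.

```python
import numpy as np
from PyEMD import EEMD   # assumption: the PyEMD ("EMD-signal") package is installed

# synthetic stand-in for a noisy fluorescence spectrum: smooth peaks plus noise
rng = np.random.default_rng(0)
wavelength = np.linspace(300.0, 500.0, 1024)
clean = (np.exp(-((wavelength - 380) / 12) ** 2)
         + 0.6 * np.exp(-((wavelength - 430) / 18) ** 2))
noisy = clean + 0.08 * rng.standard_normal(wavelength.size)

# decompose into intrinsic mode functions; ensemble averaging over noisy
# copies is what distinguishes EEMD from plain EMD and mitigates mode mixing
eemd = EEMD(trials=50)
imfs = eemd.eemd(noisy, wavelength)

# simple reconstruction rule (an assumption, not the paper's exact rule):
# drop the first, highest-frequency IMFs, which carry most instrument noise
denoised = imfs[2:].sum(axis=0)
print("MSE noisy:   ", float(np.mean((noisy - clean) ** 2)))
print("MSE denoised:", float(np.mean((denoised - clean) ** 2)))
```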
Teaching AI Search Algorithms in a Web-Based Educational System
Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis
2013-01-01
In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…
A Developed Artificial Bee Colony Algorithm Based on Cloud Model
Directory of Open Access Journals (Sweden)
Ye Jin
2018-04-01
Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T̃, presented in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula, and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent lifetime increase and maintenance cost reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
Geometric noise reduction for multivariate time series.
Mera, M Eugenia; Morán, Manuel
2006-03-01
We propose an algorithm for the reduction of observational noise in chaotic multivariate time series. The algorithm is based on a maximum likelihood criterion, and its goal is to reduce the mean distance of the points of the cleaned time series to the attractor. We give evidence of the convergence of the empirical measure associated with the cleaned time series to the underlying invariant measure, implying the possibility to predict the long run behavior of the true dynamics.
A Greedy Algorithm for Neighborhood Overlap-Based Community Detection
Directory of Open Access Journals (Sweden)
Natarajan Meghanathan
2016-01-01
Full Text Available The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes that are neighbors of both u and v to the number of nodes that are neighbors of at least one of u or v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another. Accordingly, we propose a greedy algorithm that iteratively removes the edges of a network in increasing order of their neighborhood overlap and calculates the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) with the largest cumulative modularity score are identified as the communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare it against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge-betweenness-based Girvan-Newman (GN) community detection algorithm.
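The greedy procedure can be sketched directly from this description. The code below is illustrative (it uses networkx for graph handling and modularity scoring; the paper's own implementation may differ): edges are removed in increasing NOVER order, and the partition with the best modularity is kept.

```python
import networkx as nx                                  # assumed available
from networkx.algorithms.community import modularity

def nover(G, u, v):
    """Neighborhood overlap of edge u-v: shared neighbors / distinct neighbors."""
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def nover_communities(G):
    """Cut edges in increasing NOVER order; keep the partition of best modularity."""
    H = G.copy()
    edges = sorted(H.edges(), key=lambda e: nover(H, *e))
    best_q, best_parts = float("-inf"), [set(G.nodes())]
    for u, v in edges:
        H.remove_edge(u, v)
        parts = [set(c) for c in nx.connected_components(H)]
        q = modularity(G, parts)          # scored against the original graph
        if q > best_q:
            best_q, best_parts = q, parts
    return best_q, best_parts

G = nx.karate_club_graph()
q, parts = nover_communities(G)
print(f"best modularity {q:.3f} with {len(parts)} communities")
```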
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are otherwise idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two compared with the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
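For reference, the serial Thomas algorithm whose forward and backward sweeps the paper pipelines looks as follows; this is a standard textbook sketch, verified here against a dense solver on a random diagonally dominant system.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with the (serial) Thomas algorithm.

    a: sub-diagonal (len n, a[0] unused), b: diagonal (len n),
    c: super-diagonal (len n, c[-1] unused), d: right-hand side.
    The forward sweep and back substitution below are exactly the two
    phases whose pipelining across processors the paper reorganizes.
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# verify against a dense solve on a random diagonally dominant system
rng = np.random.default_rng(0)
n = 8
a, c = rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```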
Directory of Open Access Journals (Sweden)
Mohamed Yaseen Jabarulla
2018-05-01
Full Text Available Ultrasound images are corrupted with multiplicative noise known as speckle, which reduces the effectiveness of image processing and hampers interpretation. This paper proposes a multiplicative speckle suppression technique for ultrasound liver images, based on a signal reconstruction model known as sparse representation (SR) over dictionary learning. In the proposed technique, the non-uniform multiplicative noise is first converted into additive noise using an enhanced homomorphic filter. This is followed by pixel-based total variation (TV) regularization and patch-based SR over a dictionary trained using K-singular value decomposition (K-SVD). Finally, the split Bregman algorithm is used to solve the optimization problem and estimate the de-speckled image. In simulations performed on both synthetic and clinical ultrasound images, the proposed technique achieved peak signal-to-noise ratios of 35.537 dB for the dictionary trained on noisy image patches and 35.033 dB for the dictionary trained on a set of reference ultrasound image patches. Further, the evaluation results show that the proposed method performs better than other state-of-the-art denoising algorithms in terms of both peak signal-to-noise ratio and subjective visual quality assessment.
Energy Technology Data Exchange (ETDEWEB)
Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)
2016-03-15
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images, such as CT and MRI from hybrid scanners, are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review works using anatomical information in molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
Tolerance based algorithms for the ATSP
Goldengorin, B; Sierksma, G; Turkensteen, M; Hromkovic, J; Nagl, M; Westfechtel, B
2004-01-01
In this paper we use arc tolerances, instead of arc costs, to improve Branch-and-Bound type algorithms for the Asymmetric Traveling Salesman Problem (ATSP). We derive new tighter lower bounds based on exact and approximate bottleneck upper tolerance values of the Assignment Problem (AP). It is shown
Analysis of energy-based algorithms for RNA secondary structure prediction
Directory of Open Access Journals (Sweden)
Hajiaghayi Monir
2012-02-01
Full Text Available Abstract Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on the nearest-neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA). Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving the accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs, such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy, is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms
Gao, Yang; Wang, Xuesong; Cheng, Yuhu; Wang, Z Jane
2015-08-01
To take full advantage of hyperspectral information, to avoid data redundancy, and to address the curse of dimensionality, dimensionality reduction (DR) is particularly important for analyzing hyperspectral data. Exploiting the tensor character of hyperspectral data, a DR algorithm based on a class-aware tensor neighborhood graph and patch alignment is proposed here. First, hyperspectral data are represented in tensor form through a window field to keep the spatial information of each pixel. Second, using a tensor distance criterion, a class-aware tensor neighborhood graph containing discriminating information is obtained. In the third step, employing the patch alignment framework extended to the tensor space, we can obtain globally optimal spectral-spatial information. Finally, the solution of the tensor subspace is calculated using an iterative method, and low-dimensional projection matrices for hyperspectral data are obtained accordingly. The proposed method effectively explores the spectral and spatial information in hyperspectral data simultaneously. Experimental results on three real hyperspectral datasets show that, compared with some popular vector- and tensor-based DR algorithms, the proposed method can yield better performance with fewer tensor training samples required.
Directory of Open Access Journals (Sweden)
Tieyu Zhao
2015-01-01
Full Text Available Optical image encryption has attracted more and more researchers' attention, and various encryption schemes have been proposed. In existing optical cryptosystems, phase functions or images are usually used as the encryption keys, and it is difficult for a traditional public-key algorithm (such as RSA or ECC) to complete the transfer of such large numerical keys. In this paper, we propose a key distribution scheme based on a phase retrieval algorithm and the RSA public-key algorithm, which solves the key distribution problem in optical image encryption systems. Furthermore, we also propose a novel image encryption system based on this key distribution principle. In the system, different keys can be used in every encryption process, which greatly improves the security of the system.
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts
Directory of Open Access Journals (Sweden)
Alice Capobiango
2009-12-01
Full Text Available The objectives of this study were to assess the frequency of anal HPV in patients with cervical intraepithelial neoplasia (CIN), to verify the agreement between the subtypes found at the two sites, and to investigate the factors influencing the occurrence of anal HPV in women with CIN without clinical evidence of immunosuppression. Fifty-two women aged 16 to 72 years with a diagnosis of grade I, II, or III cervical intraepithelial neoplasia were evaluated. Identification of HPV DNA (deoxyribonucleic acid) and of seven viral subtypes was performed by polymerase chain reaction (PCR) on material collected from the anus and uterine cervix. Factors that could contribute to anal infection were investigated, such as parity, number of partners, smoking, anal manipulation and intercourse, and the type of gynecological disease. Of the 52 women, HPV was diagnosed in the anal region in 25 (48%), of whom 23 (44%) also had HPV in the uterine cervix, a significant result for the presence of HPV in CIN carriers. In 16 (31%) HPV was diagnosed only in the uterine cervix, and in 11 (21%) it was not identified in either the cervix or the anus. There was a significant association for the variables parity (p=0.02) and number of partners (p=0.04). It was concluded that women with genital HPV are more likely to be affected by anal HPV; that there is no unanimous agreement between the HPV subtypes of the cervix and of the anus; and that parity and number of partners increase the incidence of anal HPV in women without immunodeficiency and with cervical HPV.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using the evolutionary algorithm technique of
Minimum Probability of Error-Based Equalization Algorithms for Fading Channels
Directory of Open Access Journals (Sweden)
Janos Levendovszky
2007-06-01
Full Text Available Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE and guarantee better performance than the traditional zero forcing (ZF or minimum mean square error (MMSE algorithms. The new equalization methods require channel state information which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
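For context on the ZF and MMSE baselines the abstract compares against, here is a minimal linear FIR equalizer design sketch. The tap-solving setup, parameter names, and unit-power signal assumption are illustrative; this is not the paper's PE-minimizing method:

```python
import numpy as np

def linear_equalizer(h, n_taps, snr_db, method="mmse", delay=None):
    """Design ZF or MMSE FIR equalizer taps for a known channel h by
    solving for taps that restore a delayed unit impulse."""
    h = np.asarray(h, dtype=float)
    delay = (len(h) + n_taps) // 2 if delay is None else delay
    # Convolution (Toeplitz) matrix: H @ taps == equalized impulse response
    H = np.zeros((len(h) + n_taps - 1, n_taps))
    for i in range(n_taps):
        H[i:i + len(h), i] = h
    d = np.zeros(H.shape[0])
    d[delay] = 1.0                                    # desired response
    if method == "zf":
        taps, *_ = np.linalg.lstsq(H, d, rcond=None)  # zero forcing
    else:
        noise_var = 10 ** (-snr_db / 10)              # MMSE regularization
        taps = np.linalg.solve(H.T @ H + noise_var * np.eye(n_taps), H.T @ d)
    return taps
```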
Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu
2018-06-01
An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous-attribute discretization and reduction are handled as optimization preparation by the E-Cluster and C_RED methods, in which the segmentation numbers do not need to be provided in advance and can adapt continuously to the data characteristics. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: a K-prototypes algorithm with self-adaptive cluster number, and clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable-attribute discretization and reduction in the boiler system and for obtaining optimization parameters under the [max η_b, min y_NOx] multi-objective rule.
An improved recommendation algorithm for network structure based on two partial graphs
Directory of Open Access Journals (Sweden)
Deng Song
2017-08-01
Full Text Available In this paper, we introduce an improved recommendation algorithm based on network structure. Building on the standard material-diffusion algorithm and considering the influence of users' ratings on the recommendation, we improve the adjustment factor of the initial resource-allocation vector and the resource-transfer matrix in the recommendation algorithm. Using a practical data set from the GroupLens website, we performed a series of experiments to evaluate the performance of the proposed algorithm. The experimental results reveal that it yields better recommendation accuracy and a higher hit rate than collaborative filtering and network-based inference, solves the cold-start and scalability problems of the standard material-diffusion algorithm, and also diversifies the recommendation results.
Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander
2017-08-01
Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions, as well as high forces caused by surrounding soft tissues, can prevent the desired reduction movements. The repetition of reduction movements leads to a trial-and-error process which prolongs the duration of surgery. By planning an appropriate reduction path-an optimal sequence of target-directed movements-these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated from CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A*-algorithm searches for a collision-free route of minimal force from the dislocated into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. The mean value and standard deviation of the occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robotic-assisted developments.
Boström, O; Fredriksson, R; Håland, Y; Jakobsson, L; Krafft, M; Lövsund, P; Muser, M H; Svensson, M Y
2000-03-01
Long-term whiplash-associated disorders (WAD) 1-3 sustained in low-velocity rear-end impacts are the most common disabling injury in Sweden. Determining neck injury mechanisms and developing methods to measure neck-injury-related parameters are therefore important for current crash-safety research. A new neck injury criterion (NIC) has previously been proposed and evaluated by means of dummy, human and mathematical rear-impact simulations. So far, the criterion appears to be sensitive to the major car- and collision-related risk factors for injuries with long-term consequences. To further evaluate the applicability of the NIC, four seats were tested according to a recently proposed sled-test procedure. 'Good' as well as 'bad' seats were chosen on the basis of a recently presented disability risk ranking list. The dummy used in the current tests was the Biofidelic Rear Impact Dummy (BioRID). The results of this study showed that NICmax values were generally related to the real-world risk of long-term WAD 1-3. Furthermore, these results suggested that NICmax calculated from sled tests using the BioRID dummy can be used for evaluating the neck injury risk of different car seats.
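The criterion itself is compact. A sketch of its commonly published form; the signal conventions and integration scheme below are assumptions, not the paper's exact measurement definitions:

```python
import numpy as np

def nic(a_t1, a_head, dt):
    """NIC(t) in its commonly cited form, 0.2 * a_rel(t) + v_rel(t)**2,
    where a_rel is the relative x-acceleration between the T1 vertebra
    and the head (m/s^2), v_rel is its time integral (m/s), and 0.2 m
    approximates the neck length. NICmax is the peak value."""
    a_rel = np.asarray(a_t1) - np.asarray(a_head)
    v_rel = np.cumsum(a_rel) * dt            # integrate relative acceleration
    nic_t = 0.2 * a_rel + v_rel ** 2
    return nic_t, nic_t.max()                # NIC(t) trace and NICmax
```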
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
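As background on the real-valued transform underlying the RDGT, one common (non-separable) 2-D DHT definition can be obtained directly from an FFT via H = Re(F) - Im(F). A minimal sketch of that identity, not the paper's convolver-bank implementation (the paper's fast DHT algorithm and Gabor framework are more elaborate):

```python
import numpy as np

def dht2(x):
    """2-D discrete Hartley transform of a real array, using the identity
    H = Re(F) - Im(F) for the cas(2*pi*(u*m/M + v*n/N)) kernel, where F
    is the 2-D DFT. Valid for real-valued input x."""
    F = np.fft.fft2(x)
    return F.real - F.imag
```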
Collaborative filtering recommendation model based on fuzzy clustering algorithm
Yang, Ye; Zhang, Yunhua
2018-05-01
As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation performance in big-data environments. In traditional clustering analysis, each object is strictly divided into one of several classes and the boundary of this division is very clear. However, for most objects in real life, there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through the hybrid optimization of a latent-semantic algorithm and a fuzzy clustering algorithm, in cooperation with the collaborative filtering algorithm. The fuzzy clustering algorithm is introduced to fuzzily cluster item-attribute information, which lets each item belong to different categories with different membership degrees, increases the density of the data, effectively reduces its sparsity, and solves the problem of low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the model with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves the recommendation accuracy.
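The soft assignment at the heart of this approach is fuzzy c-means. A minimal sketch of the standard FCM membership update, not the paper's full hybrid model; the cluster count, fuzzifier, and iteration budget are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: returns soft memberships U (n x c), so each
    item-attribute vector belongs to every category with some degree."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # rows sum to one
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U
```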
Application of epidemic algorithms for smart grids control
International Nuclear Information System (INIS)
Krkoleva, Aleksandra
2012-01-01
Smart Grids are a new concept for electricity network development, aiming to provide an economically efficient and sustainable power system by effectively integrating the actions and needs of the network users. The thesis addresses the Smart Grids concept, with emphasis on control strategies developed on the basis of epidemic algorithms, more specifically gossip algorithms. The thesis is developed around three Smart Grid aspects: the changed role of consumers in terms of taking part in providing services within Smart Grids; the possibilities to implement decentralized control strategies based on distributed algorithms; and information exchange and the benefits emerging from the implementation of information and communication technologies. More specifically, the thesis presents a novel approach for providing ancillary services by implementing gossip algorithms. In a decentralized manner, by exchanging information between the consumers and by making decisions at the local level, based on the received information and local parameters, the group achieves its global objective, i.e. providing ancillary services. The thesis presents an overview of Smart Grid control strategies with emphasis on new strategies developed for the most promising Smart Grid concepts, such as microgrids and virtual power plants. The thesis also presents the characteristics of epidemic algorithms and the possibilities for their implementation in Smart Grids. Based on the research on epidemic algorithms, two applications have been developed; these applications are the main outcome of the research. The first application enables consumers, represented by their commercial aggregators, to participate in load reduction and, consequently, to participate in the balancing market or reduce the balancing costs of the group. In this context, the gossip algorithms are used for the aggregator's message dissemination for load reduction and for households and small commercial and industrial consumers to participate in maintaining
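The decentralized decision-making described here rests on randomized gossip. A toy sketch of pairwise gossip averaging, with all parameters illustrative; the thesis's protocols disseminate load-reduction messages rather than averages, but the convergence-without-coordinator idea is the same:

```python
import random

def gossip_average(values, rounds=200, seed=1):
    """Pairwise gossip: each round two random nodes average their local
    values; all nodes converge to the global mean with no central
    coordinator."""
    random.seed(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = random.sample(range(len(x)), 2)   # pick a random pair
        x[i] = x[j] = (x[i] + x[j]) / 2.0        # local exchange and update
    return x
```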
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
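To make the DAS/DMAS contrast concrete, here is a minimal sketch of both beamformers on already-delayed channel data. The array layout and the signed-square-root convention follow the usual DMAS formulation; this is not the paper's MVB-DMAS:

```python
import numpy as np

def das(signals):
    """Delay-and-sum over pre-delayed channel signals (n_ch x n_samples)."""
    return signals.sum(axis=0)

def dmas(signals):
    """Delay-multiply-and-sum: pairwise products of distinct channels,
    with a signed square root to keep units consistent with pressure."""
    n = signals.shape[0]
    out = np.zeros(signals.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            p = signals[i] * signals[j]
            out += np.sign(p) * np.sqrt(np.abs(p))
    return out
```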
Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm
DEFF Research Database (Denmark)
Puchinger, Sven; Müelich, Sven; Mödinger, David
2017-01-01
We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.
Learning-based meta-algorithm for MRI brain extraction.
Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang
2011-01-01
The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through the exemplars. We further develop a level-set-based fusion method to combine multiple candidate extractions into a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method produces more accurate and robust brain extraction results, at a Jaccard index of 0.956 +/- 0.010 on a total of 340 subjects under 6-fold cross-validation, compared to those of BET and BSE even using their best parameter combinations.
Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M
2013-09-01
Radioiodine is a common option for the treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated upon the prescription of a fixed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al., developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account the 131I uptake and its effective half-life, the target (adenoma) volume, and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols and with the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma was closely similar to that assumed by our model. The effective half-life was highly variable between patients and critically affected the dose calculation. The administered activities were clearly lower with respect to "fixed" activities and the prescriptions of other protocols. The proposed algorithm proved to be effective also for the treatment of single hyperfunctioning thyroid nodules and allowed a significant reduction of the administered 131I activities, without loss of clinical efficacy.
International Nuclear Information System (INIS)
Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.
2011-01-01
To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the resulting acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
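The simplest instance of the non-differentiable regularized criteria such methods minimize is ℓ1-regularized least squares, solved by proximal gradient (ISTA). A real-valued sketch in a generic basis, not the paper's wavelet frames, complex penalties, or local convex constraints:

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Proximal-gradient (ISTA) for min 0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for the smooth term
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))            # gradient step on data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```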
A Voxel-Based Filtering Algorithm for Mobile LIDAR Data
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, the mobile LiDAR points are first partitioned, in the xy-plane, into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
Fidelity-Based Ant Colony Algorithm with Q-learning of Quantum System
Liao, Qin; Guo, Ying; Tu, Yifeng; Zhang, Hang
2018-03-01
The quantum ant colony algorithm (ACA) has potential applications in quantum information processing, such as solutions of the traveling salesman problem, the zero-one knapsack problem, the robot route planning problem, and so on. To shorten the search time of the ACA, we suggest the fidelity-based ant colony algorithm (FACA) for the control of quantum systems. Motivated by the structure of the Q-learning algorithm, we demonstrate the combination of a FACA with Q-learning and suggest the design of a fidelity-based ant colony algorithm with Q-learning to improve the performance of the FACA in a spin-1/2 quantum system. The numeric simulation results show that the FACA with Q-learning can efficiently avoid trapping in locally optimal policies and increase the convergence speed of the quantum system.
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
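Among the software linearization schemes such a comparison typically covers, the lookup table with piecewise-linear interpolation is the canonical one. A minimal integer-arithmetic sketch; the breakpoint spacing and names are assumed for illustration, not taken from the paper:

```python
def linearize(adc_value, lut, step=64):
    """Piecewise-linear lookup-table correction, integer-friendly as on a
    low-cost microcontroller. `lut` maps raw ADC codes spaced `step`
    counts apart to calibrated physical values."""
    i = min(adc_value // step, len(lut) - 2)    # segment index
    frac = adc_value - i * step                 # position inside segment
    # linear interpolation between lut[i] and lut[i+1], integer division last
    return lut[i] + (lut[i + 1] - lut[i]) * frac // step
```

Memory use grows with the table size while accuracy improves, which is exactly the trade-off (memory vs. linearization accuracy vs. execution time) the paper's comparison quantifies.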
Development of Base Transceiver Station Selection Algorithm for ...
African Journals Online (AJOL)
TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...
A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation
Directory of Open Access Journals (Sweden)
Guomin Li
2016-12-01
Full Text Available An improved pilot pattern algorithm for facilitating channel estimation in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameters of the least squares (LS) algorithm, which belongs to the space-time block-coded (STBC) category, for channel estimation in pilot-based MIMO-OFDM systems. Simulation results show that the algorithm performs better than the classical single-symbol scheme. In contrast to the double-symbol scheme, the proposed algorithm achieves nearly the same performance with only half of that scheme's complexity.
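For reference, the LS estimate that pilot-based schemes build on reduces, per receive antenna, to a division at the pilot tones followed by interpolation across subcarriers. A single-antenna sketch with assumed array names; the paper's STBC formulation spans multiple transmit antennas:

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
    """LS channel estimate H = Y/X at pilot subcarriers, then linear
    interpolation over the remaining subcarriers. `pilot_idx` must be
    sorted in increasing order."""
    H_p = rx_pilots / tx_pilots                    # least squares at pilots
    k = np.arange(n_subcarriers)
    H_re = np.interp(k, pilot_idx, H_p.real)       # interpolate real part
    H_im = np.interp(k, pilot_idx, H_p.imag)       # and imaginary part
    return H_re + 1j * H_im
```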
Algorithm for Wireless Sensor Networks Based on Grid Management
Directory of Open Access Journals (Sweden)
Geng Zhang
2014-05-01
Full Text Available This paper analyzes the key issues of trust models for wireless sensor networks and describes how to build one, covering the definition of trust for wireless sensor networks, its computation, and the credibility of the trust model in application. For the problem that nodes are vulnerable to attack, this paper proposes a grid-based trust algorithm obtained by deep exploration of the trust model within a credit-management framework. The algorithm screens nodes by reliability and rotates their schedules so that the trusted nodes within an area cover it in parallel. The analysis shows that the size of the trust threshold has a great influence on the safety and quality of coverage throughout the coverage area. Simulations test the validity and correctness of the algorithm.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate more relative-tasks to less relative-robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the problem of multi-robot task allocation and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
A novel image encryption algorithm based on a 3D chaotic map
Kanso, A.; Ghebleh, M.
2012-07-01
Recently, Solak et al. [Solak E, Çokal C, Yildiz OT, Biyikoǧlu T. Cryptanalysis of Fridrich's chaotic image encryption. Int J Bifur Chaos 2010;20:1405-1413] cryptanalyzed the chaotic image encryption algorithm of [Fridrich J. Symmetric ciphers based on two-dimensional chaotic maps. Int J Bifur Chaos 1998;8(6):1259-1284], which was considered a benchmark for measuring the security of many image encryption algorithms. This attack can also be applied to other encryption algorithms that have a structure similar to Fridrich's algorithm, such as that of [Chen G, Mao Y, Chui C. A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos Soliton Fract 2004;21:749-761]. In this paper, we suggest a novel image encryption algorithm based on a three-dimensional (3D) chaotic map that can defeat the aforementioned attack, among other existing attacks. The design of the proposed algorithm is simple and efficient, and is based on three phases which provide the properties necessary for a secure image encryption algorithm, including confusion and diffusion. In phase I, the image pixels are shuffled according to a search rule based on the 3D chaotic map. In phases II and III, 3D chaotic maps are used to scramble the shuffled pixels through mixing and masking rules, respectively. Simulation results show that the suggested algorithm satisfies the required performance tests, such as a high security level, a large key space and acceptable encryption speed. These characteristics make it a suitable candidate for use in cryptographic applications.
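The confusion/diffusion structure described here can be illustrated generically. A toy sketch that drives a pixel permutation and a mask from a 1-D logistic map, standing in for the paper's 3D map; this is for illustration only and is neither the published algorithm nor cryptographically secure:

```python
import numpy as np

def chaotic_keystream(n, x0=0.3141, r=3.9999):
    """Logistic-map keystream in (0, 1); a stand-in for a 3D chaotic map."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(image, x0=0.3141):
    """Confusion (chaotic permutation) then diffusion (XOR masking) on a
    flat uint8 image; x0 plays the role of the secret key."""
    flat = image.ravel().astype(np.uint8)
    ks = chaotic_keystream(flat.size, x0)
    perm = np.argsort(ks)                      # chaotic shuffle order
    mask = (ks * 256).astype(np.uint8)         # chaotic byte mask
    return (flat[perm] ^ mask).reshape(image.shape)
```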
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2015-08-01
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As the non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption, and (2) a small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.
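The temporal high-pass idea reduces to subtracting a slow per-pixel average from each frame. A minimal sketch; the recursive low-pass constant `m` is assumed, and the paper's grayscale-mapping refinement is omitted:

```python
import numpy as np

def thp_nuc(frames, m=64):
    """Temporal high-pass NUC sketch: a per-pixel recursive low-pass tracks
    the (slowly drifting) fixed-pattern offset; subtracting it keeps the
    moving scene content. `frames` is a sequence of 2-D arrays."""
    f = frames[0].astype(np.float64)           # low-pass state per pixel
    out = []
    for x in frames:
        f = (x + (m - 1) * f) / m              # recursive temporal low-pass
        out.append(x - f + f.mean())           # remove offset, keep mean level
    return np.stack(out)
```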
Cloud Computing Task Scheduling Based on Cultural Genetic Algorithm
Directory of Open Access Journals (Sweden)
Li Jian-Wen
2016-01-01
Full Text Available A task scheduling strategy based on a cultural genetic algorithm (CGA) is proposed in order to improve the efficiency of task scheduling on cloud computing platforms, targeting the minimization of the total time and cost of task scheduling. The improved genetic algorithm is used to construct the main population space and the knowledge space under a cultural framework; the two spaces evolve independently and in parallel, forming a mechanism of mutual promotion for dispatching cloud tasks. Simultaneously, in order to prevent the genetic algorithm's tendency to fall into local optima, a non-uniform mutation operator is introduced to improve the search performance of the algorithm. The experimental results show that the CGA reduces the total time and lowers the cost of scheduling, making it an effective algorithm for cloud task scheduling.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT
Directory of Open Access Journals (Sweden)
Cunsuo Pang
2016-09-01
Full Text Available This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for the identification of complicated moving targets. The algorithm, consisting of an STFRFT order-changing and quick-selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency features of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT’s performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) for improving the probability of target recognition.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for the identification of complicated moving targets. The algorithm, consisting of an STFRFT order-changing and quick-selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency features of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) for improving the probability of target recognition.
Karatsuba-Ofman Multiplier with Integrated Modular Reduction for GF(2^m)
Directory of Open Access Journals (Sweden)
CUEVAS-FARFAN, E.
2013-05-01
Full Text Available In this paper a novel GF(2^m) multiplier based on the Karatsuba-Ofman algorithm is presented. A binary-field multiplication in polynomial basis is typically viewed as a two-step process: a polynomial multiplication followed by a modular reduction step. This research proposes a modification to the original Karatsuba-Ofman algorithm in order to integrate the modular reduction inside the polynomial multiplication step. Modular reduction is achieved by using parallel linear feedback registers. The new algorithm is described in detail, and results from a hardware implementation on FPGA technology are discussed. The hardware architecture is described in VHDL and synthesized for a Virtex-6 device. Although the proposed field multiplier can be implemented for arbitrary finite fields, the targeted finite fields are those recommended for Elliptic Curve Cryptography. Compared with other KOA multipliers, our proposed multiplier uses 36% fewer area resources and improves the maximum delay by 10%.
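To illustrate the two-step view the paper starts from, here is a software sketch of Karatsuba-Ofman multiplication over GF(2)[x] with a separate final reduction. The paper's contribution is precisely to fold that reduction into the multiply via LFSRs, which this sketch does not do; `poly` is the irreducible polynomial, e.g. (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1 for one field commonly recommended for ECC:

```python
def ko_mul_gf2m(a, b, n, poly):
    """Karatsuba-Ofman multiply of GF(2)[x] polynomials held as Python
    ints (bit i = coefficient of x^i), then reduce mod `poly` of degree n."""
    def clmul(a, b, w):
        if w <= 8:                              # schoolbook base case
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                b >>= 1
            return r
        h = w // 2
        mask = (1 << h) - 1
        a0, a1 = a & mask, a >> h               # split a = a1*x^h + a0
        b0, b1 = b & mask, b >> h
        lo = clmul(a0, b0, h)
        hi = clmul(a1, b1, h)
        mid = clmul(a0 ^ a1, b0 ^ b1, h) ^ lo ^ hi   # Karatsuba middle term
        return lo ^ (mid << h) ^ (hi << (2 * h))

    r = clmul(a, b, n)
    for i in range(2 * n - 2, n - 1, -1):       # clear bits above degree n-1
        if (r >> i) & 1:
            r ^= poly << (i - n)                # modular reduction step
    return r
```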
Directory of Open Access Journals (Sweden)
Chen Deyun
2013-01-01
Full Text Available Because image reconstruction accuracy in electrical capacitance tomography is affected by the "soft field" nature of the sensing field and by ill-conditioning, a super-resolution image reconstruction algorithm based on Landweber iteration is proposed in this paper, starting from the working principle of the electrical capacitance tomography system. The method regularizes the solution and derives a closed-form solution via the fast Fourier transform of the convolution kernel, which ensures the uniqueness of the solution and improves the stability and quality of the image reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the Landweber algorithm, so this work provides a new method for electrical capacitance tomography image reconstruction.
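For reference, the baseline being improved on is the classical Landweber iteration. A minimal sketch, assuming a linearized forward model with sensitivity matrix S and a normalized capacitance vector c; the step size and non-negativity clipping are common practice, not taken from the paper:

```python
import numpy as np

def landweber(S, c, alpha=None, n_iter=500):
    """Classical Landweber iteration for ECT reconstruction:
    g_{k+1} = g_k + alpha * S^T (c - S g_k), with a physical-range clip."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # stable step size
    g = S.T @ c                                   # back-projection initial guess
    for _ in range(n_iter):
        g = g + alpha * (S.T @ (c - S @ g))       # gradient-descent update
        g = np.clip(g, 0.0, 1.0)                  # enforce permittivity bounds
    return g
```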
K-Nearest Neighbor Intervals Based AP Clustering Algorithm for Large Incomplete Data
Directory of Open Access Journals (Sweden)
Cheng Lu
2015-01-01
Full Text Available The Affinity Propagation (AP) algorithm is an effective algorithm for clustering analysis, but it cannot be directly applied to incomplete data. In view of the prevalence of missing data and the uncertainty of missing attributes, we put forward a modified AP clustering algorithm based on K-nearest-neighbor intervals (KNNI) for incomplete data. Based on an Improved Partial Data Strategy, the proposed algorithm estimates the KNNI representation of missing attributes by using the attribute distribution information of the available data. The similarity function is adapted to handle the interval data, so that the improved AP algorithm becomes applicable to incomplete data. Experiments on several UCI datasets show that the proposed algorithm achieves impressive clustering results.
Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Lei Chen
2014-01-01
Full Text Available The computational cost of blind source separation based on bio-inspired intelligence optimization is high. In order to solve this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is utilized as the objective function, and the artificial bee colony algorithm is used to optimize it. The source signal component, once separated, is then removed from the mixtures using the deflation method. All the source signals can be recovered successfully by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves a significant improvement in both computational cost and the quality of signal separation compared to previous algorithms.
Size reduction of complex networks preserving modularity
Energy Technology Data Exchange (ETDEWEB)
Arenas, A.; Duch, J.; Fernandez, A.; Gomez, S.
2008-12-24
The ubiquity of modular structure in real-world complex networks has made it the focus of attention in many attempts to understand the interplay between network topology and functionality. The best approaches to the identification of modular structure are based on the optimization of a quality function known as modularity. However, this optimization is a hard task, since the computational complexity of the problem is in the NP-hard class. Here we propose an exact method for reducing the size of weighted (directed and undirected) complex networks while keeping their modularity invariant. This size reduction allows heuristic algorithms that optimize modularity to explore the modularity landscape more thoroughly. We compare the modularity obtained in several real complex networks by using the Extremal Optimization algorithm, before and after the size reduction, showing the improvement obtained. We speculate that the proposed analytical size reduction could be extended to an exact coarse graining of the network in the scope of real-space renormalization.
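The invariant being preserved is Newman-Girvan modularity. A minimal dense-matrix sketch of computing Q for a partition; the paper's reduction guarantees this value is unchanged on the smaller network:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity Q of a partition of an undirected,
    possibly weighted network with adjacency matrix A:
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    two_m = A.sum()                       # 2m for undirected networks
    k = A.sum(axis=1)                     # (weighted) degrees
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```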
A Cancer Gene Selection Algorithm Based on the K-S Test and CFS
Directory of Open Access Journals (Sweden)
Qiang Su
2017-01-01
Full Text Available Background. To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test, and then uses CFS to select genes from those selected by the K-S test. Results. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, in terms of accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. Conclusions. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
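The first stage is a straightforward per-gene hypothesis test. A minimal sketch using SciPy's two-sample K-S test; the significance level and data layout are assumed, and the CFS pruning stage is not shown:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_filter(X, y, alpha=0.05):
    """Keep genes whose expression distributions differ between the two
    classes under a two-sample Kolmogorov-Smirnov test.
    X: samples x genes matrix, y: binary class labels."""
    keep = []
    for g in range(X.shape[1]):
        stat, p = ks_2samp(X[y == 0, g], X[y == 1, g])
        if p < alpha:                     # distributions differ significantly
            keep.append(g)
    return keep
```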
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, which results in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
A grammar-based semantic similarity algorithm for natural language sentences.
Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng
2014-01-01
This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to "artificial languages" such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept-similarity comparison instead of co-occurring terms/words, may not always determine the perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two well-known benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure.
A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning
Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei
2013-03-01
In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.
Huang, Shuqiang; Tao, Ming
2017-01-22
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality-of-service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms.
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems
Directory of Open Access Journals (Sweden)
Shuqiang Huang
2017-01-01
Full Text Available Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality-of-service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms.
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems
Huang, Shuqiang; Tao, Ming
2017-01-01
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality-of-service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms. PMID:28117735
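The pairwise competition that distinguishes CSO from PSO is compact. A minimal sketch of one CSO generation following the standard CSO formulation (the social factor phi and update form are the usual ones; the adaptive opposition-based refinement the paper adds is omitted):

```python
import numpy as np

def cso_step(X, V, fitness, phi=0.1, rng=None):
    """One generation of the competitive swarm optimizer (minimization):
    particles are paired at random; each pair's loser learns from its
    winner and from the swarm mean, while winners pass through unchanged.
    X, V: (n, d) positions and velocities; odd particle out is skipped."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    idx = rng.permutation(n)
    mean = X.mean(axis=0)
    f = np.array([fitness(x) for x in X])
    for a, b in zip(idx[0::2], idx[1::2]):
        w, l = (a, b) if f[a] <= f[b] else (b, a)   # winner / loser
        r1, r2, r3 = rng.random((3, d))
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean - X[l])
        X[l] = X[l] + V[l]
    return X, V
```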
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint
Energy Technology Data Exchange (ETDEWEB)
Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili
2017-06-15
As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented on the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model, with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.
Directory of Open Access Journals (Sweden)
Liuhui Zhao
2017-01-01
Full Text Available A shockwave-based speed harmonization algorithm for the longitudinal movement of automated vehicles is presented in this paper. With the advent of the Connected/Automated Vehicle (C/AV) environment, the proposed algorithm can be applied to capture instantaneous shockwaves constructed from the vehicular speed profiles shared by individual equipped vehicles. Using a continuous wavelet transform (CWT) method, the algorithm detects abnormal speed drops in real time and optimizes speed to prevent the shockwave from propagating to the upstream traffic. A traffic simulation model is calibrated to evaluate the applicability and efficiency of the proposed algorithm. At 100% C/AV market penetration, the simulation results show that the CWT-based algorithm accurately detects abnormal speed drops. With this improved detection accuracy, the simulation results also demonstrate that congestion can be mitigated, reducing travel time and delay by up to approximately 9% and 18%, respectively. It is also found that the shockwave caused by non-recurrent congestion is quickly dissipated even at low market penetration.
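The detection step can be illustrated with a single-scale wavelet response. A sketch that convolves the shared speed profile with a Ricker (Mexican-hat) wavelet and flags large-magnitude responses as abrupt speed changes; the scale and threshold are illustrative, not the paper's calibrated values:

```python
import numpy as np

def ricker(points, a):
    """Unnormalized Ricker (Mexican-hat) wavelet, a common CWT kernel."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def detect_speed_drops(speeds, scale=8.0, thresh=5.0):
    """Single-scale CWT-style detector: large-magnitude wavelet responses
    mark abrupt changes in the speed profile that could seed a shockwave."""
    w = ricker(int(10 * scale), scale)
    resp = np.convolve(speeds, w, mode="same")   # wavelet response at one scale
    return np.where(np.abs(resp) > thresh)[0]    # indices of candidate events
```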
Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm
Directory of Open Access Journals (Sweden)
Zhehuang Huang
2015-01-01
Full Text Available The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve the global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve the global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
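The log-linear selection step amounts to a softmax over behavior scores. A minimal sketch; the feature and weight layout is assumed, since the abstract does not give the paper's specific feature set:

```python
import numpy as np

def select_behavior(features, weights, rng=None):
    """Log-linear (softmax) behavior selection: each candidate behavior is
    scored by exp(w . f) over its feature vector and sampled with
    probability proportional to its score.
    features: (n_behaviors, n_features), weights: (n_features,)."""
    rng = np.random.default_rng() if rng is None else rng
    scores = features @ weights
    p = np.exp(scores - scores.max())      # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)         # index of the chosen behavior
```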
Portfolio optimization by using linear programming models based on genetic algorithm
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the portfolio optimization problem, it is formulated as a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the Indonesian capital market. The analysis shows that the portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than that obtained with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining optimal investment portfolios, particularly with linear programming models.
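A sketch of the GA side of this setup: maximize mean return under a mean-absolute-deviation risk cap with fully invested weights. The penalty form, operators, and all parameters are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def ga_portfolio(returns, risk_tol, pop=100, gens=200, seed=0):
    """GA sketch: maximize mean portfolio return subject to a
    mean-absolute-deviation risk cap, weights on the simplex.
    returns: (n_periods, n_assets) historical return matrix."""
    rng = np.random.default_rng(seed)
    n = returns.shape[1]
    mu = returns.mean(axis=0)

    def fitness(w):
        port = returns @ w
        mad = np.abs(port - port.mean()).mean()           # absolute-deviation risk
        return mu @ w - 100.0 * max(0.0, mad - risk_tol)  # penalized objective

    P = rng.dirichlet(np.ones(n), size=pop)               # feasible initial pop
    for _ in range(gens):
        f = np.array([fitness(w) for w in P])
        parents = P[np.argsort(f)[-pop // 2:]]            # truncation selection
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            c = 0.5 * (a + b) + rng.normal(0, 0.02, n)    # crossover + mutation
            c = np.clip(c, 0, None)
            c /= c.sum()                                  # repair onto simplex
            kids.append(c)
        P = np.vstack([parents, kids])
    return P[np.argmax([fitness(w) for w in P])]
```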
DNA-based watermarks using the DNA-Crypt algorithm
Directory of Open Access Journals (Sweden)
Barnekow Angelika
2007-05-01
Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
DNA-based watermarks using the DNA-Crypt algorithm.
Heider, Dominik; Barnekow, Angelika
2007-05-29
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
DNA-based watermarks using the DNA-Crypt algorithm
Heider, Dominik; Barnekow, Angelika
2007-01-01
Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
Reactive power and voltage control based on general quantum genetic algorithms
DEFF Research Database (Denmark)
Vlachogiannis, Ioannis (John); Østergaard, Jacob
2009-01-01
This paper presents an improved evolutionary algorithm based on quantum computing for optimal steady-state performance of power systems. Moreover, the proposed general quantum genetic algorithm (GQ-GA) can be applied to various combinatorial optimization problems. In this study the GQ-GA determines...... techniques such as enhanced GA, multi-objective evolutionary algorithm and particle swarm optimization algorithms, as well as the classical primal-dual interior-point optimal power flow algorithm. The comparison demonstrates the ability of the GQ-GA to reach better solutions....
CAS algorithm-based optimum design of PID controller in AVR system
International Nuclear Information System (INIS)
Zhu Hui; Li Lixiang; Zhao Ying; Guo Yu; Yang Yixian
2009-01-01
This paper presents a novel design method for determining the optimal PID controller parameters of an automatic voltage regulator (AVR) system using the chaotic ant swarm (CAS) algorithm. In the parameter tuning process, the CAS algorithm is iterated to give the optimal parameters of the PID controller based on a fitness function, where the position vector of each ant in the CAS algorithm corresponds to the parameter vector of the PID controller. The proposed CAS-PID controllers ensure better control system performance with respect to the reference input in comparison with GA-PID controllers. Numerical simulations are provided to verify the effectiveness and feasibility of the PID controller based on the CAS algorithm.
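The chaotic ant swarm dynamics themselves are not reproduced here; the sketch below only illustrates the position-vector-equals-PID-gains idea, using a plain random swarm search on a hypothetical discrete first-order plant with an integral-of-absolute-error fitness.

```python
import numpy as np

def step_response_cost(gains, a=0.95, b=0.05, n=200):
    """IAE of a discrete PID loop on the plant y[k+1] = a*y[k] + b*u[k]."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    cost = 0.0
    for _ in range(n):
        e = 1.0 - y                      # unit step reference
        integ += e
        u = kp * e + ki * integ + kd * (e - prev_e)
        prev_e = e
        y = a * y + b * u
        cost += abs(e)                   # unstable gains simply get a huge cost
    return cost

rng = np.random.default_rng(1)
swarm = rng.uniform(0, 2, size=(30, 3))  # each ant's position = (Kp, Ki, Kd)
best = min(swarm, key=step_response_cost)
for _ in range(100):
    swarm = np.clip(best + rng.normal(0, 0.1, size=(30, 3)), 0, None)
    cand = min(swarm, key=step_response_cost)
    if step_response_cost(cand) < step_response_cost(best):
        best = cand
print("tuned (Kp, Ki, Kd):", np.round(best, 3))
```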
An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM
Wang, Juan
2018-03-01
The iris image is easily polluted by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with hybrid features. 2D Gabor filters and the gray-level co-occurrence matrix (GLCM) are employed to generate a multi-granularity hybrid feature vector: the 2D Gabor filters capture low-to-intermediate frequency texture information, while the GLCM features capture high-frequency texture information. Finally, an extreme learning machine is used for iris recognition. Experimental results reveal that the proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance, outperforming other mainstream iris recognition algorithms.
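An ELM trains only the output layer: the hidden weights are random and the output weights come from a least-squares solve, which is what makes it fast enough for real-time recognition. A minimal sketch (the Gabor/GLCM feature extraction is omitted; data dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=128):
    """Extreme learning machine: random hidden layer + least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # fixed random feature map
    beta = np.linalg.pinv(H) @ Y           # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical data: 100 hybrid feature vectors, 3 classes (one-hot targets).
X = rng.normal(size=(100, 32))
Y = np.eye(3)[rng.integers(0, 3, 100)]
W, b, beta = elm_train(X, Y)
print(elm_predict(X[:5], W, b, beta).argmax(axis=1))
```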
The Bouguer Correction Algorithm for Gravity with Limited Range
Directory of Open Access Journals (Sweden)
MA Jian
2017-01-01
Full Text Available The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the plane or the spherical version, suffers from an approximation error caused by far-zone virtual terrain, and the error grows as the calculation point gets higher. Therefore, gravity reduction using a Bouguer correction with limited range, consistent with the scope of the topographic correction, is investigated in this paper, and a simplified formula for calculating the limited-range Bouguer correction is proposed. The algorithm, which is innovative and of some value to mathematical theory, is consistent with the equation derived from the strict integral algorithm for topographic correction. An interpolation experiment shows that gravity reduction based on the limited-range Bouguer correction is superior to the unlimited-range correction when the calculation point is higher than 1000 m.
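For reference, the classical plane Bouguer correction for an infinite slab of density ρ and thickness h, and the on-axis attraction of a finite disk of radius R (the textbook limited-range analogue, not the paper's own simplified formula), are:

```latex
\delta g_{B} = 2\pi G \rho h,
\qquad
\delta g_{B}(R) = 2\pi G \rho \left( h + R - \sqrt{R^{2} + h^{2}} \right).
```

As R grows, the disk expression tends to the infinite-slab value, and the difference between the two is precisely the far-zone contribution whose neglect causes the height-dependent error discussed above.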
An extension theory-based maximum power tracker using a particle swarm optimization algorithm
International Nuclear Information System (INIS)
Chao, Kuei-Hsiang
2014-01-01
Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune its tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances of the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. Finally, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.
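The extension-theory category recognition is not reproduced here; the sketch below shows only the underlying perturb-and-observe loop with an adaptive step size proportional to the observed power change, a common heuristic assumed for illustration. The P–V curve and all constants are hypothetical.

```python
def perturb_and_observe(measure_pv, v0=20.0, dv0=0.5, k=0.05, steps=100):
    """Adaptive-step P&O MPPT: the step size scales with the last power change.

    measure_pv(v) -> power at operating voltage v (provided by the PV system).
    """
    v, dv = v0, dv0
    p_prev = measure_pv(v)
    for _ in range(steps):
        v += dv
        p = measure_pv(v)
        if p < p_prev:          # overshot the peak: reverse direction
            dv = -dv
        # shrink the step near the peak, cap it far from the peak
        dv = max(min(k * abs(p - p_prev), abs(dv0)), 0.01) * (1 if dv > 0 else -1)
        p_prev = p
    return v

# Hypothetical P-V curve with a single maximum near 26 V.
pv_curve = lambda v: max(0.0, -(v - 26.0) ** 2 + 120.0)
print(round(perturb_and_observe(pv_curve), 2))
```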
Fuzzy Sets-based Control Rules for Terminating Algorithms
Directory of Open Access Journals (Sweden)
Jose L. VERDEGAY
2002-01-01
Full Text Available In this paper some problems arising at the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System that involves some fuzziness is considered, and in that context the questions of managing the fuzziness in some optimisation models, and then of using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed; a minimal sketch of such a termination rule follows below. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of the determination, management and use of such fuzzy rules, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.
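A fuzzy termination rule of this kind typically aggregates the degrees to which "improvement is low" and "the search has stalled" and stops when the combined degree is high. The membership shapes and thresholds below are hypothetical, for illustration only.

```python
def stop_degree(rel_improvement, stall_iters, max_stall=50):
    """Fuzzy degree of 'the algorithm should stop'.

    Combines a low relative improvement with a long stall using the
    minimum t-norm as fuzzy AND. Membership shapes are hypothetical.
    """
    low_improve = max(0.0, min(1.0, 1.0 - rel_improvement / 0.01))
    long_stall = min(1.0, stall_iters / max_stall)
    return min(low_improve, long_stall)

# Terminate a conventional iterative solver once the degree exceeds 0.8.
print(stop_degree(rel_improvement=0.001, stall_iters=45) > 0.8)  # True
```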
The research of automatic speed control algorithm based on Green CBTC
Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi
2017-06-01
Automatic speed control is one of the core technologies of train operation control systems. It is a typical multi-objective optimization control algorithm, which achieves train speed control for punctuality, comfort, energy saving and precise parking. At present, automatic train speed control technology is widely used in metro and inter-city railways. It has been found that automatic speed control can effectively reduce the driver's workload and improve operation quality. However, the algorithms currently in use perform poorly at energy saving, sometimes worse than manual driving. To solve the energy-saving problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. Based on the Green CBTC system, the algorithm adjusts the operation status of the train to improve the utilization rate of regenerative braking feedback energy while ensuring the punctuality, comfort and precise parking targets. As a result, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. The simulation results show that the algorithm based on the Green CBTC system can effectively reduce energy consumption by improving the utilization rate of regenerative braking feedback energy.
Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging
International Nuclear Information System (INIS)
Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco
2011-01-01
The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones, exploiting the properties of trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise-ratio systems.
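Retrieval from three images can be illustrated with the standard three-bucket phase-stepping identities (the textbook route; the paper's own linearized scheme may differ in detail). With intensities I_k = a + b·cos(φ + 2πk/3), the mean a relates to attenuation, the amplitude b to scattering (visibility loss) and the phase φ to refraction:

```python
import numpy as np

def retrieve_three_step(I0, I1, I2):
    """Mean, amplitude and phase from three equally spaced phase steps."""
    a = (I0 + I1 + I2) / 3.0              # mean intensity (attenuation)
    c = (2 * I0 - I1 - I2) / 3.0          # = b * cos(phi)
    s = (I2 - I1) / np.sqrt(3.0)          # = b * sin(phi)
    return a, np.hypot(c, s), np.arctan2(s, c)

# Self-check on synthetic values a=10, b=4, phi=0.7:
a, b, phi = 10.0, 4.0, 0.7
Ik = [a + b * np.cos(phi + 2 * np.pi * k / 3) for k in range(3)]
print(retrieve_three_step(*Ik))           # ~ (10.0, 4.0, 0.7)
```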
Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms
Directory of Open Access Journals (Sweden)
Christos Ttofis
2012-01-01
Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
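A software sketch of the edge-directed idea described above: a fixed-support SAD matcher that evaluates disparities only where the left image has a strong horizontal gradient. Window size, disparity range and the edge threshold are hypothetical; the paper's FPGA architectures are of course far more elaborate.

```python
import numpy as np

def edge_sad_disparity(left, right, max_disp=16, win=2, edge_thresh=30.0):
    """SAD stereo matching computed only at edge pixels of the left image.

    left, right: 2D grayscale arrays (rectified pair). Returns a sparse
    disparity map with -1 where no edge was found.
    """
    gx = np.abs(np.diff(left.astype(float), axis=1, prepend=left[:, :1]))
    disp = -np.ones(left.shape, dtype=int)
    h, w = left.shape
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            if gx[y, x] < edge_thresh:            # skip non-edge pixels
                continue
            patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))    # winner-takes-all disparity
    return disp
```

Restricting the loop to edge pixels is exactly the search-space reduction the paper exploits in hardware: the inner SAD evaluation runs only for a small fraction of the image.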
A Sustainable City Planning Algorithm Based on TLBO and Local Search
Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang
2017-09-01
Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it provides a broad stage for the application of artificial intelligence theories and methods. Because sustainable city design is essentially a constrained optimization problem, swarm intelligence algorithms, which have been studied extensively, are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a recent swarm intelligence algorithm whose inspiration comes from the “teaching” and “learning” behavior of a classroom. The evolution of the population is realized by simulating the teacher “teaching” the students and the students “learning” from each other; the algorithm has few parameters, is efficient, is conceptually simple and is easy to implement. It has been successfully applied to scheduling, planning, configuration and other fields, has achieved good results, and is receiving more and more attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement a random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has clear advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
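For reference, one generation of the classical TLBO update that the paper builds on (the TLBO_LS local-search extension and the urban-planning evaluation model are not reproduced; the sphere function stands in for the real objective):

```python
import numpy as np

rng = np.random.default_rng(0)

def tlbo_step(pop, f):
    """One TLBO generation: teacher phase, then learner phase (minimization)."""
    fit = np.array([f(x) for x in pop])
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    # Teacher phase: move the class toward the teacher, away from the mean.
    TF = rng.integers(1, 3)                      # teaching factor in {1, 2}
    cand = pop + rng.random(pop.shape) * (teacher - TF * mean)
    improved = np.array([f(c) for c in cand]) < fit
    pop = np.where(improved[:, None], cand, pop)
    # Learner phase: each student learns from a random peer.
    fit = np.array([f(x) for x in pop])
    for i in range(len(pop)):
        j = rng.integers(len(pop))
        if j == i:
            continue
        direction = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        c = pop[i] + rng.random(pop.shape[1]) * direction
        if f(c) < fit[i]:
            pop[i], fit[i] = c, f(c)             # greedy acceptance
    return pop

sphere = lambda x: float((x ** 2).sum())
pop = rng.uniform(-5, 5, (20, 4))
for _ in range(50):
    pop = tlbo_step(pop, sphere)
print(round(min(sphere(x) for x in pop), 6))
```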
Effectiveness of firefly algorithm based neural network in time series ...
African Journals Online (AJOL)
Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
The Heeger & Bergen Pyramid Based Texture Synthesis Algorithm
Directory of Open Access Journals (Sweden)
Thibaud Briand
2014-11-01
Full Text Available This contribution deals with the Heeger-Bergen pyramid-based texture analysis/synthesis algorithm. It gives a detailed explanation of the original algorithm, tested on many characteristic examples. Our analysis reproduces the original results and also brings a minor improvement concerning non-periodic textures. Inspired by theories of visual perception, Heeger and Bergen proposed to characterize a texture by the first-order statistics of both its color and its responses to multiscale and multi-orientation filters, namely the steerable pyramid. The Heeger-Bergen algorithm consists of the following procedure: starting from a white noise image, histogram matchings are performed on the noise alternately in the image domain and the steerable pyramid domain, so that the corresponding histograms match those of the input texture.
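The core operation iterated by the algorithm is histogram matching: each noise sample keeps its rank but takes the value of the same rank in the reference texture. A minimal sketch (the steerable pyramid decomposition is omitted; arrays are assumed to have the same number of samples):

```python
import numpy as np

def histogram_match(source, reference):
    """Impose the empirical histogram of `reference` onto `source`.

    The rank of each source sample is preserved; its value is replaced by
    the reference value of the same rank.
    """
    ref = np.sort(reference.ravel())
    ranks = np.argsort(np.argsort(source.ravel()))   # rank of each source pixel
    return ref[ranks].reshape(source.shape)

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))                # synthesis starts from white noise
texture = rng.gamma(2.0, 1.0, size=(64, 64))     # stand-in for an input texture
matched = histogram_match(noise, texture)
print(np.allclose(np.sort(matched.ravel()), np.sort(texture.ravel())))  # True
```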
Energy Technology Data Exchange (ETDEWEB)
Kilkenny, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bell, P. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bradley, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bleuel, D. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Caggiano, J. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dewald, E. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kauffman, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schneider, M. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shaughnessy, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shelton, R. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Yeamans, C. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Batha, S. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grim, G. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herrmann, H. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Merrill, F. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leeper, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangster, T. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Edgell, D. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glebov, V. Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Regan, S. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Frenje, J. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gatu-Johnson, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Petrasso, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rindernecht, H. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zylstra, A. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cooper, G. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruiz, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-05
At the completion of the National Ignition Campaign, NIF had about 36 different types of diagnostics. These were based on several decades of development on Nova and OMEGA and involved the whole US ICF community. A plan for a limited set of NIF diagnostics was documented by the Joint Central Diagnostic Team in the NIF Conceptual Design Report in 1994. These diagnostics and many more had been installed two decades later. We give a short description of each of the 36 different types of NIC diagnostics, grouped by function: target drive, target response, and target assembly, stagnation and burn. A comparison of NIF diagnostics with the Nova diagnostics shows that the NIF diagnostic capability is broadly equivalent to that of Nova in 1999. NIF diagnostics have a much greater degree of automation and rigor than Nova's, and the NIF diagnostic suite incorporates some scientific innovation compared to Nova and OMEGA, notably a much higher speed x-ray imager. Directions for future NIF diagnostics are discussed.
Flowbca: A flow-based cluster algorithm in Stata
Meekes, J.; Hassink, W.H.J.
In this article, we introduce the Stata implementation of a flow-based cluster algorithm written in Mata. The main purpose of the flowbca command is to identify clusters based on relational data of flows. We illustrate the command by providing multiple applications, from the research fields of
Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.
Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing
2017-06-12
Remote sensing technologies have been widely applied in the monitoring, synthesis and modeling of urban environments. By incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to locality and relativity of the energy terms, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its time-efficiency remains competitive.
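The KDE color model mentioned above evaluates, for a query color, the average Gaussian kernel response over the pixels of a superpixel. A minimal sketch (the bandwidth and the example colors are hypothetical; the paper's energy terms built on top of this are omitted):

```python
import numpy as np

def gaussian_kde_pdf(samples, query, h=5.0):
    """Gaussian-kernel density estimate of a superpixel's color distribution.

    samples: (n, 3) colors of the pixels in one superpixel; query: (3,) color.
    Uses the 3-D Gaussian kernel (2*pi)^(-3/2) * h^(-3) * exp(-||d||^2 / 2h^2).
    """
    d2 = ((samples - query) ** 2).sum(axis=1)
    return np.mean(np.exp(-d2 / (2 * h ** 2)) / ((2 * np.pi) ** 1.5 * h ** 3))

rng = np.random.default_rng(0)
superpixel = rng.normal([120, 80, 60], 4.0, size=(200, 3))   # hypothetical colors
print(gaussian_kde_pdf(superpixel, np.array([121.0, 79.0, 61.0])))
```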
PLASMA HEATING INSIDE INTERPLANETARY CORONAL MASS EJECTIONS BY ALFVÉNIC FLUCTUATIONS DISSIPATION
Energy Technology Data Exchange (ETDEWEB)
Li, Hui; Wang, Chi; Zhang, Lingqian [State Key Laboratory of Space Weather, National Space Science Center, CAS, Beijing, 100190 (China); He, Jiansen [School of Earth and Space Sciences, Peking University, Beijing, 100871 (China); Richardson, John D.; Belcher, John W. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA (United States); Tu, Cui, E-mail: hli@spaceweather.ac.cn [Laboratory of Near Space Environment, National Space Science Center, CAS, Beijing, 100190 (China)
2016-11-10
Nonlinear cascade of low-frequency Alfvénic fluctuations (AFs) is regarded as one of the candidate energy sources that heat plasma during the non-adiabatic expansion of interplanetary coronal mass ejections (ICMEs). However, AFs inside ICMEs have seldom been reported in the literature. In this study, we investigate AFs inside ICMEs using observations from Voyager 2 between 1 and 6 au. It has been found that AFs with a high degree of Alfvénicity occurred frequently inside almost all of the identified ICMEs (30 out of 33) and for 12.6% of the ICME time interval. As ICMEs expand and move outward, the percentage of AF duration decays linearly in general. The occurrence rate of AFs inside ICMEs is much lower than that in the ambient solar wind, especially within 4.75 au. AFs inside ICMEs are more frequently present in the center and at the boundaries of ICMEs. In addition, the proton temperature inside ICMEs has a similar “W”-shaped distribution. These findings suggest a significant contribution of AFs to local plasma heating inside ICMEs.
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir
2015-11-01
The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly obtained from the newly allowed prediction schemes, including variable block modes. However, these schemes require high computational effort to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast intermode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. The algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.
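A fast intermode decision of this kind prunes the candidate partition list before the expensive rate-distortion search. The sketch below is a heavily simplified illustration of that structure; all field names, statistics and thresholds are hypothetical, and the paper's adaptive threshold computation is not reproduced.

```python
def candidate_modes(mb, neighbor_mvs, t_stat, t_homo):
    """Prune the H.264 inter-mode search using neighborhood statistics.

    mb: dict with 'sad_colocated' (difference to the co-located macroblock)
    and 'variance' (spatial variance). neighbor_mvs: motion vectors of
    neighboring macroblocks. t_stat / t_homo would be adapted to content.
    """
    motion = max((abs(mv[0]) + abs(mv[1]) for mv in neighbor_mvs), default=0)
    if mb["sad_colocated"] < t_stat:              # temporally stationary
        return ["SKIP", "16x16"]                  # try SKIP-like modes only
    if mb["variance"] < t_homo and motion <= 1:   # homogeneous, low activity
        return ["16x16", "16x8", "8x16"]          # large partitions only
    return ["16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]  # full search

print(candidate_modes({"sad_colocated": 50, "variance": 12.0},
                      [(0, 1), (1, 0)], t_stat=100, t_homo=20.0))
```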
FPGA helix tracking algorithm for PANDA
Energy Technology Data Exchange (ETDEWEB)
Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David [II. Physikalisches Institut, University of Giessen (Germany); Ye, Hua [Institute of High Energy Physics, CAS (China); Collaboration: PANDA-Collaboration
2016-07-01
The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as the first level and on a farm of GPUs or PCs as the second level. One important part of the system is online track reconstruction. In this presentation, an online tracking algorithm for helix track reconstruction in the solenoidal field is shown. The VHDL-based algorithm is tested with different types of events at different event rates. Furthermore, a study of T0 extraction from the tracking algorithm is performed, and a concept of simultaneous tracking and T0 determination is presented.
A Novel Preferential Diffusion Recommendation Algorithm Based on User’s Nearest Neighbors
Directory of Open Access Journals (Sweden)
Fuguo Zhang
2017-01-01
Full Text Available Recommender systems are a very efficient way to deal with the problem of information overload for online users. In recent years, network-based recommendation algorithms have demonstrated much better performance than standard collaborative filtering methods. However, most network-based algorithms do not give a high enough weight to the influence of the target user's nearest neighbors in the resource diffusion process, while a user or an object with high degree obtains a larger influence in the standard mass diffusion algorithm. In this paper, we propose a novel preferential diffusion recommendation algorithm considering the significance of the target user's nearest neighbors and evaluate it on three real-world data sets: MovieLens 100k, MovieLens 1M, and Epinions. Experimental results demonstrate that the novel preferential diffusion recommendation algorithm based on the user's nearest neighbors can significantly improve recommendation accuracy and diversity.
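For context, the standard two-step mass diffusion (ProbS) baseline that such algorithms modify works as follows: resource flows from the target user's items to all users who collected them, and then back to items. The paper's preferential variant additionally reweights by the target user's nearest neighbors, which is omitted in this sketch.

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Standard mass diffusion (ProbS) on a user-item bipartite graph.

    A: (n_users, n_items) 0/1 interaction matrix. Returns recommendation
    scores over items for `user`.
    """
    k_item = A.sum(axis=0)                   # item degrees
    k_user = A.sum(axis=1)                   # user degrees
    # Step 1: each collected item spreads its resource equally to its users.
    resource_users = (A / np.maximum(k_item, 1)) @ A[user]
    # Step 2: each user spreads the received resource equally to their items.
    scores = A.T @ (resource_users / np.maximum(k_user, 1))
    scores[A[user] > 0] = 0                  # do not re-recommend known items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(mass_diffusion_scores(A, user=0).round(3))
```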
A novel line segment detection algorithm based on graph search
Zhao, Hong-dan; Liu, Guo-ying; Song, Xu
2018-02-01
To address the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships of the candidate segments are depicted by a graph model, on which a depth-first search is employed to determine which adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
Directory of Open Access Journals (Sweden)
Wilke MH
2011-12-01
savings of -4 days and ICUD reductions of -1.8 days. Our algorithm contains recommendations for ABX onset (PCT ≥ 0.5 ng/ml), validation of whether ABX is appropriate or not (a delta from day 2 to day 3 of ≥ 30% indicates inappropriate ABX), and recommendations for discontinuing ABX (PCT ≤ 0.25 ng/ml). We received 278,264 episode datasets, in which we identified by computer-based selection 3,263 cases with sepsis. After excluding cases with a length of stay (LOS) too short to achieve the intended savings, we ended with 1,312 cases with ICUD and 268 cases without ICUD. The average length of stay was 27.7 ± 25.7 days for ICU patients and 17.5 ± 14.6 days for non-ICU patients. ICU patients had an average of 8.8 ± 8.7 ICUD. After applying the simulation model to this population, we calculated possible savings of € -1,163,000 for ICU patients and € -36,512 for non-ICU patients. Discussion: Our findings concerning the savings from the reduction of ABD are consistent with other publications; savings in ICUD had not been economically evaluated so far. Our algorithm may set a new standard in PCT-based ABX; however, the findings are based on data modelling. The algorithm will be implemented in 5-10 hospitals in 2012, and the effects in clinical reality will be measured 6 months after implementation. Conclusion: Managing sepsis with daily monitoring of PCT using our refined algorithm is suitable to save substantial costs in hospitals. Implementation in clinical routine settings will show how much of the calculated effect will be achieved in reality.
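A simplified sketch of the start/stop thresholds quoted in the abstract; the day-2/day-3 delta check on ABX appropriateness is deliberately omitted because its exact sign convention is ambiguous in this record, and this is illustrative code, not clinical guidance.

```python
def pct_recommendation(pct):
    """PCT-guided antibiotic (ABX) recommendation using the abstract's thresholds."""
    if pct >= 0.5:      # ng/ml: recommend ABX onset / continuation
        return "start antibiotics"
    if pct <= 0.25:     # ng/ml: recommend discontinuing ABX
        return "discontinue antibiotics"
    return "monitor daily"

print(pct_recommendation(0.6))   # start antibiotics
```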
A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences
Chang, Jia Wei; Hsieh, Tung Cheng
2014-01-01
This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to include concept similarity comparison instead of co-occurring terms/words, may not always determine the perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure. PMID:24982952
A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences
Directory of Open Access Journals (Sweden)
Ming Che Lee
2014-01-01
Full Text Available This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to include concept similarity comparison instead of co-occurring terms/words, may not always determine the perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure.
Algorithmic Algebraic Combinatorics and Gröbner Bases
Klin, Mikhail; Jurisic, Aleksandar
2009-01-01
This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries, with a special emphasis on algorithmic aspects and the use of the theory of Gröbner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGMA.
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-07-18
Gravity matching navigation is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, proper modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism based on an improved ABC algorithm using external speed information is presented in this paper. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
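For reference, the employed-bee phase of the basic ABC algorithm that the paper modifies (the onlooker and scout phases, the speed-information coupling and the gravity map matching itself are omitted; the sphere function is a stand-in objective):

```python
import numpy as np

rng = np.random.default_rng(2)

def abc_employed_phase(foods, f):
    """Employed-bee phase of the basic ABC algorithm (minimization).

    Each food source x_i is perturbed along one random dimension relative to
    a random partner x_k: v = x + phi * (x - x_k), phi ~ U(-1, 1);
    greedy selection keeps the better of x and v.
    """
    n, dim = foods.shape
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])
        d = rng.integers(dim)
        v = foods[i].copy()
        v[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[k, d])
        if f(v) < f(foods[i]):
            foods[i] = v
    return foods

sphere = lambda x: float((x ** 2).sum())
foods = rng.uniform(-5, 5, (10, 3))
for _ in range(100):
    foods = abc_employed_phase(foods, sphere)
print(round(min(map(sphere, foods)), 4))
```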
Cryptographic protocol security analysis based on bounded constructing algorithm
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, this approach encounters the problem of high computational complexity, because protocol participants are arbitrary, their message structures are complex and their executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated in the analysis process are greatly reduced by introducing a new algebraic technique called the Universal Polynomial Equation, and the algorithm can be used to verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and a previously unreported attack instance was detected by using this tool.
Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors
Okuma, Takanori; Yasuura, Hiroto
2001-01-01
This paper presents a real-time OS based on μITRON using a proposed voltage scheduling algorithm for variable-voltage processors, which can vary their supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks with a low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
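The core per-task decision can be illustrated with the textbook voltage-scaling model (frequency roughly linear in voltage, dynamic energy roughly quadratic in voltage); this is an assumption for illustration, not the paper's exact model, and the voltage levels are hypothetical.

```python
def pick_voltage(cycles, deadline, f_max=1.0, levels=(1.0, 0.75, 0.5)):
    """Choose the lowest normalized supply voltage that still meets the deadline.

    cycles / f_max is the task's execution time at full speed; running at
    voltage v stretches it to cycles / (f_max * v) while cutting dynamic
    energy roughly by v^2.
    """
    for v in sorted(levels):                     # try the lowest voltage first
        if cycles / (f_max * v) <= deadline:
            return v
    return max(levels)                           # deadline tight: full voltage

print(pick_voltage(cycles=6.0, deadline=10.0))   # 0.75: 6/0.75 = 8 <= 10
```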
Hybrid fuzzy charged system search algorithm based state estimation in distribution networks
Directory of Open Access Journals (Sweden)
Sachidananda Prasad
2017-06-01
Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm based state estimation method for radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed state estimation method considers bus voltage magnitudes and phase angles as state variables, along with some equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and the Indian 85-bus practical radial distribution system. The obtained results have been compared with those of the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to verify the feasibility of the algorithm.
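The weighted-square objective referred to above is the standard WLS state-estimation criterion. In the usual notation, with measurements z, measurement model h(x), state vector x (bus voltage magnitudes and angles) and measurement standard deviations σ_i:

```latex
\min_{x} \; J(x) \;=\; \sum_{i=1}^{m} w_i \left( z_i - h_i(x) \right)^2,
\qquad w_i = \frac{1}{\sigma_i^{2}}.
```

The CSS/FACSS algorithms search this objective directly with a population of charged particles rather than solving the normal equations as classical WLS does.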
COOBBO: A Novel Opposition-Based Soft Computing Algorithm for TSP Problems
Directory of Open Access Journals (Sweden)
Qingzheng Xu
2014-12-01
Full Text Available In this paper, we propose a novel definition of the opposite path. Its core feature is that the sequence of candidate paths and the distances between adjacent nodes in the tour are considered simultaneously. In a sense, a candidate path and its corresponding opposite path have the same (or at least similar) distance to the optimal path in the current population. Based on an accepted framework for employing opposition-based learning, Oppositional Biogeography-Based Optimization using the Current Optimum, called the COOBBO algorithm, is introduced to solve traveling salesman problems. We demonstrate its performance on eight benchmark problems and compare it with other optimization algorithms. Simulation results illustrate that the excellent performance of our proposed algorithm is attributed to the distinct definition of the opposite path. In addition, its great strength lies in exploitation for enhancing solution accuracy, not exploration for improving population diversity. Finally, by comparing different versions of COOBBO, another conclusion is that each successful opposition-based soft computing algorithm needs to adjust and maintain a good balance between backward adjacent nodes and forward adjacent nodes.
Analysis of the situation of waste electrical and electronic equipment (RAEE/WEEE) in the Gironès
Terol Torres, Júlia; Ros Badosa, Ester; Barbero Bueno, Verónica
2007-01-01
A study of the situation of waste electrical and electronic equipment in the Gironès, rethinking its current management, assessing whether a new plant would be needed, together with a social awareness campaign and the construction of a possible intermediate RAEE sorting plant for the Gironès region.
An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Aries Pratiarso
2015-06-01
Full Text Available In wireless sensor network applications, the positions of nodes are randomly distributed following the contour of the observation area. A simple solution without any measurement tools is provided by range-free methods; however, such methods yield only coarse estimates of node positions. In this paper, we propose the Adaptive Connectivity-based (ACC) algorithm, a combination of the range-free Centroid algorithm and a hop-based connectivity algorithm. Nodes can estimate their own positions based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, each of which has a certain weight depending on the received signal strength; the weighted values are used to obtain the estimated positions of the nodes. Simulation results show that the proposed algorithm has up to 3 meters of position estimation error in a 100x100 square meter observation area with up to 3 hop counts for an 80-meter communication range, and achieves an average positioning error up to 10 meters lower than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
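For context, the Weighted Centroid baseline mentioned above estimates a node's position as a signal-strength-weighted average of the anchor positions. The weighting exponent below is a common heuristic chosen for illustration, not the paper's scheme.

```python
import numpy as np

def weighted_centroid(anchors, rssi):
    """Weighted-centroid position estimate from anchor positions and RSSI.

    anchors: (n, 2) known anchor coordinates; rssi: (n,) received signal
    strengths in dBm. Stronger (less negative) signals get larger weights.
    """
    anchors = np.asarray(anchors, dtype=float)
    w = 1.0 / (np.abs(np.asarray(rssi, dtype=float)) ** 2)  # heuristic weights
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
rssi = [-40, -70, -65, -80]              # node is nearest the (0, 0) anchor
print(weighted_centroid(anchors, rssi).round(1))
```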
GPU-based fast pencil beam algorithm for proton therapy
International Nuclear Information System (INIS)
Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya
2011-01-01
Performance of a treatment planning system is an essential factor in making sophisticated plans. Dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculations is the pencil beam algorithm, which produces relatively accurate results but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit) based pencil beam algorithm. We have implemented this algorithm and calculated dose distributions in the case of a water phantom. The results were compared with those obtained by a traditional method with respect to the computational time and the discrepancy between the two methods. The new algorithm shows 5-20 times faster performance using the NVIDIA GeForce GTX 480 card in comparison with the Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.
Kriging-based algorithm for nuclear reactor neutronic design optimization
International Nuclear Information System (INIS)
Kempf, Stephanie; Forget, Benoit; Hu, Lin-Wen
2012-01-01
Highlights: ► A Kriging-based algorithm was selected to guide research reactor optimization. ► We examined the impact of parameter values on the algorithm. ► The best parameter values were incorporated into a set of best practices. ► The algorithm with best practices was used to optimize the thermal flux of the concept. ► The final design produces a thermal flux 30% higher than other 5 MW reactors. - Abstract: Kriging, a geospatial interpolation technique, has been used in the present work to drive a search-and-optimization algorithm which produces the optimum geometric parameters for a 5 MW research reactor design. The technique has been demonstrated to produce an optimal neutronic solution after a relatively small number of core calculations. It has additionally been successful in producing a design which significantly improves thermal neutron fluxes, by 30% over existing reactors of the same power rating. Best practices for the use of this algorithm in reactor design were identified and indicate the importance of selecting proper correlation functions.
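The appeal of a Kriging-driven search is that each expensive "core calculation" is replaced, most of the time, by a cheap surrogate evaluation. A bare-bones sketch of that loop (a Gaussian correlation function with a fixed, hypothetical theta, a stand-in 1-D objective, and no uncertainty-based acquisition, all of which a real study would refine):

```python
import numpy as np

rng = np.random.default_rng(3)

def kriging_predict(X, y, Xq, theta=1.0):
    """Simple Kriging (GP regression with a Gaussian correlation function).

    X: (n, d) evaluated designs, y: (n,) objective values, Xq: query points.
    """
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    K = corr(X, X) + 1e-8 * np.eye(len(X))       # jitter for stability
    return corr(Xq, X) @ np.linalg.solve(K, y - y.mean()) + y.mean()

# Surrogate-driven search on a hypothetical 1-D "core calculation".
expensive = lambda x: (x - 0.3) ** 2             # stand-in objective
X = rng.uniform(0, 1, (5, 1))
y = expensive(X[:, 0])
for _ in range(10):
    Xq = rng.uniform(0, 1, (200, 1))
    x_new = Xq[kriging_predict(X, y, Xq).argmin()]   # surrogate minimum
    X = np.vstack([X, x_new])                        # run one real evaluation
    y = np.append(y, expensive(x_new[0]))
print("best design parameter:", round(X[y.argmin(), 0], 3))
```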
Directory of Open Access Journals (Sweden)
Hyo Seon Park
2014-01-01
Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing-technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for convergence, a genetic algorithm is combined with a resizing technique, an efficient optimization technique for controlling the drift of buildings without repetitive structural analysis. The proposed resizing-technique-based hybrid genetic algorithm is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, the optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.
Performance-based seismic design of steel frames utilizing colliding bodies algorithm.
Veladi, H
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with those of conventional design methods to show the strengths and weaknesses of the algorithm.