WorldWideScience

Sample records for assessment algorithms based

  1. A novel algorithm for computer based assessment

    OpenAIRE

    2012-01-01

    Most paper-based assessment systems evaluate student learning outcomes through graded assignments and tests, but computer-based assessment offers an opportunity to improve the efficiency of the assessment process. The use of the internet also makes possible...

  2. Genetic Algorithm Based Hybrid Fuzzy System for Assessing Morningness

    Directory of Open Access Journals (Sweden)

    Animesh Biswas

    2014-01-01

    Full Text Available This paper describes a real-life case example of assessing the morningness of individuals using a genetic algorithm-based hybrid fuzzy system. It is observed that the physical and mental performance of human beings in different time slots of a day is strongly influenced by their morningness orientation. To measure morningness, various self-reported questionnaires have been developed by researchers in the past; among them, the reduced version of the Morningness-Eveningness Questionnaire is the most widely accepted. Almost all of the linguistic terms used in such questionnaires are fuzzily defined, so assessing the responses in a crisp setting does not seem justifiable. Fuzzy-approach-based research works for assessing morningness are few in the literature. In this paper, a genetic algorithm is used to tune the parameters of a Mamdani fuzzy inference model so as to minimize the error of its predicted morningness outputs.
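The tuning loop described in this record can be sketched in miniature. The code below is an illustrative, hypothetical reduction, not the authors' system: a one-input fuzzy model with two triangular membership sets whose split point `c` is tuned by a small genetic algorithm (truncation selection, arithmetic crossover, Gaussian mutation) against labeled questionnaire scores.

```python
import random

def fuzzy_morningness(score, c):
    """Tiny Mamdani-style system: two triangular sets split at c, score in [0, 1]."""
    low = max(0.0, (c - score) / c) if c > 0 else 0.0
    high = max(0.0, (score - c) / (1.0 - c)) if c < 1 else 0.0
    if low + high == 0.0:
        return 0.5
    # centroid defuzzification with singleton consequents 0 ("evening") and 1 ("morning")
    return (low * 0.0 + high * 1.0) / (low + high)

def fitness(c, data):
    # negative squared error against labeled (score, morningness) pairs
    return -sum((fuzzy_morningness(x, c) - y) ** 2 for x, y in data)

def ga_tune(data, pop_size=20, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.05, 0.95) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, data), reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)             # arithmetic crossover
            child += rng.gauss(0.0, 0.05)     # Gaussian mutation
            children.append(min(0.95, max(0.05, child)))
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, data))
```

A real system would tune many membership parameters and rule weights at once; the single-parameter case simply makes the GA mechanics visible.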

  3. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    Science.gov (United States)

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource-constrained countries necessitate implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enables absolute risk assessment in resource-constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of the statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated, where it showed optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  4. Risk Assessment Algorithms Based On Recursive Neural Networks

    CERN Document Server

    De Lara, Alejandro Chinea Manrique

    2007-01-01

    The assessment of highly risky situations at road intersections has recently emerged as an important research topic within the automotive industry. In this paper we introduce a novel approach to computing risk functions that combines a highly non-linear processing model with a powerful information-encoding procedure. Specifically, the elements of information, either static or dynamic, that appear in a road-intersection scene are encoded using directed positional acyclic labeled graphs. The risk assessment problem is then reformulated as an inductive learning task carried out by a recursive neural network. Recursive neural networks are connectionist models capable of solving supervised and unsupervised learning problems represented by directed ordered acyclic graphs. The potential of this novel approach is demonstrated through well-predefined scenarios. The major difference of our approach compared to others is expressed by the fact of learning t...

  5. Risk Assessment Algorithms Based On Recursive Neural Networks

    OpenAIRE

    2007-01-01

    The assessment of highly risky situations at road intersections has recently emerged as an important research topic within the automotive industry. In this paper we introduce a novel approach to computing risk functions that combines a highly non-linear processing model with a powerful information-encoding procedure. Specifically, the elements of information, either static or dynamic, that appear in a road intersection scene are encoded by us...

  6. Analysis of family health history based risk assessment algorithms: classification and data requirements.

    Science.gov (United States)

    Ranade-Kharkar, Pallavi; Del Fiol, Guilherme; Williams, Janet L; Hulse, Nathan C; Haug, Peter

    2013-01-01

    Family Health History (FHH) is a valuable and potentially low-cost tool for risk assessment and diagnosis in patient-centered healthcare. In this study, we identified and analyzed existing FHH-based risk assessment algorithms (RAAs) for cardiovascular disease (CVD) and colorectal cancer (CRC) to guide implementers of electronic health record (EHR) systems regarding the data requirements for computing risk with these algorithms. We found a core set of data elements that are required by most RAAs. While some of these data are available in EHR systems, patients can be empowered to contribute the remainder.

  7. Risk Assessment for Bridges Safety Management during Operation Based on Fuzzy Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Xia Hanyu

    2016-01-01

    Full Text Available In recent years, as large-span and sea-crossing bridges have been built, bridge accidents caused by improper operational management have occurred frequently. To explore better methods for risk assessment by bridge operation departments, a method based on a fuzzy clustering algorithm is selected. The implementation steps of the fuzzy clustering algorithm are described, the risk evaluation system is built, Taizhou Bridge is selected as an example, and the quantitation of risk factors is described. The clustering algorithm based on fuzzy equivalence is then computed in MATLAB 2010a. Finally, Taizhou Bridge operation management departments are classified and sorted according to their degree of risk, and the safety situation of the operation departments is analyzed.
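A fuzzy equivalence relation of the kind this record relies on can be obtained by repeated max-min composition of a fuzzy similarity matrix; a λ-cut of the result then partitions the items into clusters. A minimal sketch, with a hypothetical 3×3 similarity matrix rather than the paper's Taizhou Bridge data:

```python
def max_min_compose(R):
    """Max-min composition R ∘ R of a square fuzzy relation."""
    n = len(R)
    return [[max(min(R[i][k], R[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(R):
    """Repeat R ∘ R until it stabilises, yielding a fuzzy equivalence relation."""
    while True:
        R2 = max_min_compose(R)
        if R2 == R:
            return R
        R = R2

def lambda_cut_clusters(R, lam):
    """Partition indices by thresholding an equivalence relation at level lam."""
    n = len(R)
    clusters, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        group = {j for j in range(n) if R[i][j] >= lam}
        seen |= group
        clusters.append(sorted(group))
    return clusters
```

Sorting departments by risk then amounts to sweeping λ from high to low and watching the clusters merge.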

  8. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results through both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
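The SVM classifier at the heart of this study can be illustrated with a bare-bones linear SVM trained by Pegasos-style sub-gradient descent, a deliberate simplification of the (likely kernelized, library-backed) classifier the authors used; the feature vectors here are hypothetical stand-ins for the in-house image-quality features:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.
    X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]   # regularisation shrink
            if margin < 1:                            # hinge-loss sub-gradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In the paper's setting, each x would be the vector of global and ROI quality features for one 3D-MRI volume, and y the investigator-assigned pass/fail label.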

  9. A Novel Dynamic Algorithm for IT Outsourcing Risk Assessment Based on Transaction Cost Theory

    Directory of Open Access Journals (Sweden)

    Guodong Cong

    2015-01-01

    Full Text Available With the great risk exposure in IT outsourcing, how to assess IT outsourcing risk has become a critical issue. However, most approaches to date need further adaptation to the particular complexity of IT outsourcing risk, as they fall short with respect to subjective bias, inaccuracy, or efficiency. This paper proposes a dynamic algorithm for risk assessment. It first puts forward an extended three-layer transferring mechanism (risk factors, risks, and risk consequences) based on transaction cost theory (TCT) as the framework of risk analysis, which links the components of the three layers with preset transferring probabilities and impacts. It then establishes an equation group between risk factors and risk consequences, which makes the “attribution” more precise by tracking the specific sources that lead to a given loss. Namely, in each phase of the outsourcing lifecycle, both the likelihood and the loss of each risk factor, and those of each risk, are acquired by solving the equation group with real collected data on risk consequences. In this “reverse” way, risk assessment becomes a responsive and interactive process driven by real data instead of subjective estimation, which improves accuracy and alleviates bias in risk assessment. A numerical case demonstrates the effectiveness of the algorithm compared with the approach put forward in other references.

  10. A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Zhongbin Wang

    2016-01-01

    Full Text Available In order to accurately identify the dynamic health of a shearer, reduce its operating faults and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for shearers based on an artificial immune algorithm is proposed. The key technologies, including the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, are provided, and the flowchart of the proposed approach is designed. A simulation example based on data collected from an industrial production scene is provided, achieving an accuracy of 96%. Furthermore, a comparison demonstrates that the proposed method exhibits higher classification accuracy than classifiers based on back-propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach is applied to an engineering problem of shearer dynamic health assessment. The industrial application results show that the research achievements can be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicate that the proposed method is feasible and outperforms the others.

  11. Slope orientation assessment for open-pit mines, using GIS-based algorithms

    Science.gov (United States)

    Grenon, Martin; Laflamme, Amélie-Julie

    2011-09-01

    Standard stability analysis in geomechanical rock slope engineering for open-pit mines relies on a simplified representation of slope geometry, which does not take full advantage of available topographical data in the early design stages of a mining project; consequently, this may lead to nonoptimal slope design. The primary objective of this paper is to present a methodology that allows for the rigorous determination of interramp and bench face slope orientations on a digital elevation model (DEM) of a designed open pit. Common GIS slope algorithms were tested to assess slope orientations on the DEM of the Meadowbank mining project's Portage pit. Planar regression algorithms based on principal component analysis provided the best results at both the interramp and the bench face levels. The optimal sampling window for interramp was 21×21 cells, while a 9×9-cell window was best at the bench level. Subsequent slope stability analysis relying on those assessed slope orientations would provide a more realistic geometry for potential slope instabilities in the design pit. The presented methodology is flexible, and can be adapted depending on a given mine's block sizes and pit geometry.
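A planar regression over a DEM sampling window, in the spirit of (though simpler than) the PCA-based algorithms the authors favour, can be sketched as an ordinary least-squares plane fit; with coordinates centred on the window, the normal equations become diagonal, so no matrix solver is needed:

```python
import math

def fit_plane(window, cell=1.0):
    """Least-squares plane z = a*x + b*y + c over a square DEM window.
    A simpler stand-in for the PCA planar regression used in the paper.
    window: n x n list of elevations; cell: grid spacing."""
    n = len(window)
    half = (n - 1) / 2.0
    xs, ys, zs = [], [], []
    for r, row in enumerate(window):
        for cidx, z in enumerate(row):
            xs.append((cidx - half) * cell)   # centred x
            ys.append((r - half) * cell)      # centred y
            zs.append(z)
    # centred, symmetric coordinates make the normal equations diagonal
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    a = sum(x * z for x, z in zip(xs, zs)) / sxx
    b = sum(y * z for y, z in zip(ys, zs)) / syy
    c = sum(zs) / len(zs)
    return a, b, c

def slope_deg(a, b):
    """Slope angle of the fitted plane, in degrees."""
    return math.degrees(math.atan(math.hypot(a, b)))
```

Running this with a 21×21 window approximates the interramp scale reported above; a 9×9 window approximates the bench scale.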

  12. A Method for Streamlining and Assessing Sound Velocity Profiles Based on Improved D-P Algorithm

    Science.gov (United States)

    Zhao, D.; WU, Z. Y.; Zhou, J.

    2015-12-01

    A multi-beam system transmits sound waves and receives the round-trip time of their reflection or scattering, making it possible to determine the depth and coordinates of detected targets using the sound velocity profile (SVP) based on Snell's Law. The SVP is measured by a profiling device. Because of the high sampling rate of modern devices, the operational time of ray tracing and beam footprint reduction increases, lowering overall efficiency. To improve the timeliness of multi-beam surveys and data processing, redundant points in the original SVP must be screened out while, at the same time, the errors introduced by streamlining the SVP must be evaluated and controlled. We present a new streamlining and evaluation method based on the Maximum Offset of sound Velocity (MOV) algorithm. Based on measured SVP data, this method selects sound velocity data points by calculating the maximum offset in the sound velocity dimension using an improved Douglas-Peucker algorithm to streamline the SVP (Fig. 1). To evaluate whether the streamlined SVP meets the desired accuracy requirements, the method is divided into two parts: SVP streamlining, and an accuracy analysis of the multi-beam sounding data processed with the streamlined SVP. The method therefore comprises two modules: the streamlining module, whose core is the MOV algorithm, and the evaluation module (Fig. 2). To assess the accuracy of the streamlined SVP, we use ray tracing and percentage error analysis to evaluate the accuracy of the sounding data both before and after streamlining (Fig. 3). By automatically optimizing the threshold, the reduction rate of sound velocity profile data can reach over 90% and the standard deviation of the percentage error of the sounding data can be controlled to within 0.1% (Fig. 4). The optimized sound velocity profile data improved the operational efficiency of the multi-beam survey and data post-processing.
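The maximum-offset streamlining step can be sketched as a Douglas-Peucker recursion in which the offset is measured in the velocity dimension only, as the MOV name suggests; the profile and tolerance below are hypothetical:

```python
def streamline_svp(profile, tol):
    """Douglas-Peucker-style reduction of a sound velocity profile.
    profile: list of (depth, velocity) pairs ordered by depth.
    The offset is measured in the velocity dimension only, mirroring
    the maximum-offset-of-velocity (MOV) idea in the abstract."""
    if len(profile) < 3:
        return list(profile)
    (d0, v0), (d1, v1) = profile[0], profile[-1]
    best_i, best_off = 0, -1.0
    for i in range(1, len(profile) - 1):
        d, v = profile[i]
        # velocity on the chord, linearly interpolated at this depth
        v_line = v0 + (v1 - v0) * (d - d0) / (d1 - d0)
        off = abs(v - v_line)
        if off > best_off:
            best_i, best_off = i, off
    if best_off <= tol:
        return [profile[0], profile[-1]]   # everything within tolerance
    left = streamline_svp(profile[: best_i + 1], tol)
    right = streamline_svp(profile[best_i:], tol)
    return left[:-1] + right               # drop duplicated split point
```

In practice the tolerance would be chosen (or auto-optimized, as the abstract describes) so that the resulting sounding error stays within the survey's accuracy budget.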

  13. Screw Performance Degradation Assessment Based on Quantum Genetic Algorithm and Dynamic Fuzzy Neural Network

    Directory of Open Access Journals (Sweden)

    Xiaochen Zhang

    2015-01-01

    Full Text Available To evaluate the performance of a ball screw, a screw performance degradation assessment technology based on a quantum genetic algorithm (QGA) and a dynamic fuzzy neural network (DFNN) is studied. The ball screw of a CINCINNATI V5-3000 machining center is treated as the study object. Two Kistler 8704B100M1 accelerometers and a Kistler 8765A250M5 three-way accelerometer are installed to monitor the degradation trend of screw performance. First, screw vibration signal features are extracted in both the time domain and the frequency domain, and feature vectors are obtained by principal component analysis (PCA). Second, the initialization parameters of the DFNN are optimized by means of the QGA. Finally, the feature vectors are input to the DFNN for training to obtain the screw performance degradation model. The experimental results show that the model can effectively evaluate the performance of NC machine screws.

  14. ITAC volume assessment through a Gaussian hidden Markov random field model-based algorithm.

    Science.gov (United States)

    Passera, Katia M; Potepan, Paolo; Brambilla, Luca; Mainardi, Luca T

    2008-01-01

    In this paper, a semi-automatic segmentation method for volume assessment of intestinal-type adenocarcinoma (ITAC) is presented and validated. The method is based on a Gaussian hidden Markov random field (GHMRF) model, an advanced version of the finite Gaussian mixture (FGM) model that encodes spatial information through the mutual influences of neighboring sites. To fit the GHMRF model, an expectation-maximization (EM) algorithm is used. We applied the method to magnetic resonance data sets (each composed of T1-weighted, contrast-enhanced T1-weighted, and T2-weighted images) for a total of 49 tumor-containing slices. We tested GHMRF performance against FGM through both numerical and clinical evaluation. Results show that the proposed method quantifies lesion area more accurately than FGM and can be applied in the evaluation of tumor response to therapy.
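The finite Gaussian mixture (FGM) baseline that the GHMRF model extends can be fitted with a short EM loop; the sketch below handles the one-dimensional two-class case rather than the paper's multi-channel MR data, and the GHMRF itself would add neighbour terms to the E-step:

```python
import math

def em_gmm2(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture: the FGM baseline
    that the GHMRF model extends with spatial neighbour interactions."""
    mu = [min(data), max(data)]        # spread initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted means, variances, and mixing weights
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
            pi[k] = nk / len(data)
    return mu, var, pi
```

Segmentation then assigns each voxel to the component with the larger responsibility; the GHMRF replaces that independent assignment with one biased toward agreeing neighbours.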

  15. iDensity: an automatic Gabor filter-based algorithm for breast density assessment

    Science.gov (United States)

    Gamdonkar, Ziba; Tay, Kevin; Ryder, Will; Brennan, Patrick C.; Mello-Thoms, Claudia

    2015-03-01

    Although many semi-automated and automated algorithms for breast density assessment have recently been proposed, none has been widely accepted. In this study a novel automated algorithm, named iDensity, inspired by the human visual system, is proposed for classifying mammograms into the four breast density categories of the Breast Imaging Reporting and Data System (BI-RADS). For each BI-RADS category, 80 cases were taken from the normal volumes of the Digital Database for Screening Mammography (DDSM), using only the left medio-lateral oblique view of each case. After image calibration using the tables provided for each scanner in the DDSM, the pectoral muscle and background were removed. Images were filtered by a median filter, down-sampled, and then passed through a filter bank consisting of Gabor filters in six orientations and three scales, as well as a Gaussian filter. Three gray-level histogram-based features and three second-order statistics features were extracted from each filtered image. Using the extracted features, mammograms were first separated into two groups, low or high density; in a second stage, the low density group was subdivided into BI-RADS I or II, and the high density group into BI-RADS III or IV. The algorithm achieved a sensitivity of 95% and specificity of 94% in the first stage, a sensitivity of 89% and specificity of 95% when classifying BI-RADS I and II cases, and a sensitivity of 88% and specificity of 91% when classifying BI-RADS III and IV.
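The Gabor filter bank is the core of the feature extraction. A single real-valued Gabor kernel (a Gaussian envelope multiplying an oriented cosine carrier) can be generated as follows; size, wavelength, and sigma here are arbitrary illustration values, not the paper's settings:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter kernel of odd side length `size`.
    theta: carrier orientation in radians; wavelength: carrier period
    in pixels; sigma: std. dev. of the Gaussian envelope."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's frame
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

A six-orientation bank, as in the abstract, would use theta = k*pi/6 for k = 0..5, with three wavelengths for the three scales; each kernel is then convolved with the mammogram before feature extraction.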

  16. Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment

    Directory of Open Access Journals (Sweden)

    Garima Singh

    2011-01-01

    Full Text Available With the emerging trend of restructuring in the electric power industry, many transmission lines have been forced to operate at almost their full capacities worldwide. As a result, more incidents of voltage instability and collapse are being observed throughout the world, leading to major system breakdowns. To avoid these undesirable incidents, a fast and accurate estimation of the voltage stability margin is required. In this paper, a genetic algorithm based back-propagation neural network (GABPNN) has been proposed for voltage stability margin estimation, which indicates the power system's proximity to voltage collapse. The proposed approach utilizes a hybrid algorithm that integrates a genetic algorithm with a back-propagation neural network, aiming to combine the capacity of GAs to avoid local minima with the fast execution of the BP algorithm. Input features for the GABPNN are selected on the basis of an angular distance-based clustering technique. The performance of the proposed GABPNN approach has been compared with the commonly used gradient-based BP neural network by estimating the voltage stability margin at different loading conditions in a 6-bus and the IEEE 30-bus system. The GA-based neural network learns faster and provides more accurate voltage stability margin estimation than the BP-based one. It is found to be suitable for online applications in energy management systems.

  17. Hyaluronic acid algorithm-based models for assessment of liver fibrosis: translation from basic science to clinical application

    Institute of Scientific and Technical Information of China (English)

    Zeinab Babaei; Hadi Parsian

    2016-01-01

    BACKGROUND: The estimation of liver fibrosis is usually dependent on liver biopsy evaluation. Because of its disadvantages and side effects, researchers try to find non-invasive methods for the assessment of liver injuries. Hyaluronic acid has been proposed as an index for scoring the severity of fibrosis, alone or in algorithm models. The algorithm models in which hyaluronic acid is used as a major constituent are more reliable and accurate in diagnosis than hyaluronic acid alone. This review describes various hyaluronic acid algorithm-based models for assessing liver fibrosis. DATA SOURCE: A PubMed database search was performed to identify articles relevant to hyaluronic acid algorithm-based models for estimating liver fibrosis. RESULT: The use of hyaluronic acid in an algorithm model is an extra and valuable tool for assessing liver fibrosis. CONCLUSIONS: Although hyaluronic acid algorithm-based models have good diagnostic power in liver fibrosis assessment, they cannot render the need for liver biopsy obsolete, and it is better to use them in parallel with liver biopsy. They can be used when frequent liver biopsy is not possible, such as when monitoring the efficacy of a treatment protocol for liver fibrosis.

  18. Implementation of Vision-based Object Tracking Algorithms for Motor Skill Assessments

    Directory of Open Access Journals (Sweden)

    Beatrice Floyd

    2015-06-01

    Full Text Available Assessment of upper extremity motor skills often involves object manipulation, drawing or writing using a pencil, or performing specific gestures. Traditional assessment of such skills usually requires a trained person to record the time and accuracy, resulting in a process that can be labor intensive and costly. Automating the entire assessment process will potentially lower the cost, produce electronically recorded data, broaden the implementations, and provide additional assessment information. This paper presents a low-cost, versatile, and easy-to-use algorithm to automatically detect and track single or multiple well-defined geometric shapes or markers. It therefore can be applied to a wide range of assessment protocols that involve object manipulation or hand and arm gestures. The algorithm localizes the objects using color thresholding and morphological operations and then estimates their 3-dimensional pose. The utility of the algorithm is demonstrated by implementing it for automating the following five protocols: the sport of Cup Stacking, the Soda Pop Coordination test, the Wechsler Block Design test, the visual-motor integration test, and gesture recognition.
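The colour-thresholding detection step can be sketched as follows; the toy single-channel image stands in for the colour segmentation and morphological clean-up the paper performs, and the centroid is the 2-D precursor to the 3-D pose estimate:

```python
def track_marker(image, lo, hi):
    """Locate a marker by intensity thresholding, returning the centroid
    (x, y) of matching pixels, or None if nothing matches.
    image: 2-D list of pixel values; [lo, hi]: inclusive threshold band."""
    xs, ys = [], []
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if lo <= v <= hi:
                xs.append(c)
                ys.append(r)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

A real implementation would threshold in a colour space such as HSV, apply morphological opening to remove speckle, and track each marker's centroid across frames to time the protocol.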

  19. Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment

    OpenAIRE

    Garima Singh; Laxmi Srivastava

    2011-01-01

    With the emerging trend of restructuring in the electric power industry, many transmission lines have been forced to operate at almost their full capacities worldwide. Due to this, more incidents of voltage instability and collapse are being observed throughout the world leading to major system breakdowns. To avoid these undesirable incidents, a fast and accurate estimation of voltage stability margin is required. In this paper, genetic algorithm based back propagation neural network (GABPNN...

  20. An overview of computer algorithms for deconvolution-based assessment of in vivo neuroendocrine secretory events.

    Science.gov (United States)

    Veldhuis, J D; Johnson, M L

    1990-06-01

    The availability of increasingly efficient computational systems has made feasible the otherwise burdensome analysis of complex neurobiological data, such as in vivo neuroendocrine glandular secretory activity. Neuroendocrine data sets are typically sparse, noisy and generated by combined processes (such as secretion and metabolic clearance) operating simultaneously over both short and long time spans. The concept of a convolution integral to describe the impact of two or more processes acting jointly has offered an informative mathematical construct with which to dissect (deconvolve) specific quantitative features of in vivo neuroendocrine phenomena. Appropriate computer-based deconvolution algorithms are capable of solving families of 100-300 simultaneous integral equations for a large number of secretion and/or clearance parameters of interest. For example, one application of computer technology allows investigators to deconvolve the number, amplitude and duration of statistically significant underlying secretory episodes of algebraically specifiable waveform and simultaneously estimate subject- and condition-specific neurohormone metabolic clearance rates using all observed data and their experimental variances considered simultaneously. Here, we will provide a definition of selected deconvolution techniques, review their conceptual basis, illustrate their applicability to biological data and discuss new perspectives in the arena of computer-based deconvolution methodologies for evaluating complex biological events.
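The convolution integral that these algorithms invert has a simple discrete form: the observed concentration is the secretion-rate series convolved with a clearance kernel, taken here as a hypothetical mono-exponential for illustration:

```python
import math

def simulate_concentration(secretion, k, dt=1.0):
    """Discrete convolution of a secretion-rate series with a
    mono-exponential clearance kernel e^(-k*t): the forward model
    that deconvolution analysis inverts.
    secretion: secretion rate at each time step; k: clearance rate."""
    n = len(secretion)
    conc = []
    for t in range(n):
        total = 0.0
        for tau in range(t + 1):
            # contribution of the secretion at time tau, decayed to time t
            total += secretion[tau] * math.exp(-k * (t - tau) * dt) * dt
        conc.append(total)
    return conc
```

Deconvolution runs this model in reverse: given noisy `conc` samples, it solves for the secretory burst times, amplitudes, and the clearance rate `k` that best reproduce the observations.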

  1. Flow Based Algorithm

    Directory of Open Access Journals (Sweden)

    T. Karpagam

    2012-01-01

    Full Text Available Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most designs in the past optimize for a single criterion, such as shortest path, cost minimization, or maximum flow. Results: This study discusses solving a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm presented here focuses on transporting liquid goods at maximum capacity over the shortest distance, and is developed in the spirit of the basic PERT and critical path methods. Conclusion/Recommendations: This flow-based algorithm helps give an optimal result for transporting maximum capacity at minimum cost. It could be used in juice factories and the milk industry, and is a good alternative for the vehicle routing problem.

  2. A genetic algorithm approach for assessing soil liquefaction potential based on reliability method

    Indian Academy of Sciences (India)

    M H Bagheripour; I Shooshpasha; M Afzalirad

    2012-02-01

    Deterministic approaches are unable to account for variations in soil strength properties, earthquake loads, and sources of error in evaluating the liquefaction potential of sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, probabilistic approaches, and especially reliability analysis, are considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, associated with a genetic algorithm (GA) and its corresponding optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of the GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here by which the liquefaction potential can be directly calculated from the estimated probability of liquefaction, the cyclic stress ratio (CSR), and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% against the observational data. The validity of the proposed concept is examined through comparison of the results obtained by the new relation with those predicted by other investigators. A further advantage of the proposed relation is that it relates the probability of liquefaction to FS and hence allows decision making based on liquefaction risk alongside the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluating liquefaction. As an application, the city of Babolsar, located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.
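Where the paper uses AFOSM, the same failure probability P(resistance < load), i.e. P(CRR < CSR), can be approximated by plain Monte Carlo sampling; the lognormal load and resistance below are an illustrative distributional assumption, not the authors' model:

```python
import math
import random

def prob_liquefaction_mc(mu_crr, cov_crr, mu_csr, cov_csr, n=200_000, seed=7):
    """Monte Carlo estimate of P(liquefaction) = P(CRR < CSR), with
    lognormal resistance (CRR) and load (CSR). A simple sampling-based
    stand-in for the AFOSM reliability analysis used in the paper.
    mu_*: means; cov_*: coefficients of variation."""
    rng = random.Random(seed)

    def lognormal(mu, cov):
        # convert (mean, CoV) to the underlying normal's parameters
        sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
        mu_ln = math.log(mu) - 0.5 * sigma_ln ** 2
        return math.exp(rng.gauss(mu_ln, sigma_ln))

    fails = sum(1 for _ in range(n)
                if lognormal(mu_crr, cov_crr) < lognormal(mu_csr, cov_csr))
    return fails / n
```

AFOSM would instead locate the design point on the limit-state surface CRR - CSR = 0 and report the reliability index β, with P ≈ Φ(-β); the Monte Carlo estimate serves as a check on that approximation.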

  3. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on computer-generated credit scores for risk assessment. However, automated risk evaluations are imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex and lacking in interpretability, and have therefore not been widely adopted. In this study, we therefore provide the first comprehensive comparison of results on credit risk assessment obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, across four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.

  4. ASSESSMENT OF THE SFIM ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    XU Han-qiu

    2004-01-01

    Fusion of images with different spatial and spectral resolutions can improve the visualization of the images. Many fusion techniques have been developed to improve the spectral fidelity and/or spatial texture quality of fused imagery. Among them, a recently proposed algorithm, SFIM (Smoothing Filter-based Intensity Modulation), is known for its high spectral fidelity and simplicity. However, the study and evaluation of the algorithm were based only on spectral and spatial criteria. Therefore, this paper aims to further study the classification accuracy of SFIM-fused imagery. Three other simple fusion algorithms, High-Pass Filter (HPF), Multiplication (MLT), and Modified Brovey (MB), have been employed for further evaluation of the SFIM. The study is based on a Landsat-7 ETM+ sub-scene covering the urban fringe of southeastern Fuzhou City, China. The effectiveness of the algorithm has been evaluated on the basis of spectral fidelity, high spatial frequency information absorption, and classification accuracy. The study reveals that the difference in smoothing filter kernel sizes used in producing the SFIM-fused images can affect the classification accuracy. Compared with the three other algorithms, the SFIM transform is the best method for retaining the spectral information of the original image and for obtaining the best classification results.
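The SFIM transform itself is a simple ratio operation: the fused band equals the multispectral band times the panchromatic band divided by a low-pass smoothed version of the pan band, so local texture is injected while spectra are preserved. A minimal NumPy sketch (the box-filter kernel size stands in for the resolution ratio; function names are ours):

```python
import numpy as np

def box_filter(img, k):
    """Mean filter with a k x k kernel (k odd), edge-replicated borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def sfim_fuse(ms_band, pan, k=3):
    """SFIM: fused = MS * Pan / mean(Pan); the ratio modulates texture only."""
    smooth = box_filter(pan, k)
    return ms_band * pan / np.maximum(smooth, 1e-9)
```

When the pan band is locally flat, the ratio cancels and the fused band equals the multispectral band exactly — the spectral-preservation property the abstract highlights.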

  5. Assessment of Groundwater Potential Based on Multicriteria Decision Making Model and Decision Tree Algorithms

    Directory of Open Access Journals (Sweden)

    Huajie Duan

    2016-01-01

    Full Text Available Groundwater plays an important role in global climate change and satisfying human needs. In this study, RS (remote sensing) and GIS (geographic information system) were utilized to generate five thematic layers, lithology, lineament density, topology, slope, and river density, considered as factors influencing the groundwater potential. Then, the multicriteria decision model (MCDM) was integrated with C5.0 and CART, respectively, to generate the decision tree with 80 surveyed tube wells divided into four classes on the basis of yield. To test the precision of the decision tree algorithms, 10-fold cross validation and the kappa coefficient were adopted, and the average kappa coefficient for C5.0 and CART was 90.45% and 85.09%, respectively. After applying the decision tree to the whole study area, four classes of groundwater potential zones were demarcated. According to the classification result, the four grades of groundwater potential zones, “very good,” “good,” “moderate,” and “poor,” occupy 4.61%, 8.58%, 26.59%, and 60.23%, respectively, with the C5.0 algorithm, and 4.68%, 10.09%, 26.10%, and 59.13%, respectively, with the CART algorithm. Therefore, we can draw the conclusion that the C5.0 algorithm is more appropriate than CART for groundwater potential zone prediction.
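The step both C5.0 and CART repeat when growing a tree is choosing the split that best purifies the classes; CART scores candidate thresholds with the Gini index. A minimal sketch of that single step (not full tree induction; names are ours):

```python
def gini(labels):
    """Gini impurity of a label multiset: 1 - sum of squared class frequencies."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    """Threshold on one feature minimising the weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

For example, splitting hypothetical well yields [1, 2, 3, 10, 11, 12] labelled three "poor" then three "good" finds the threshold 3 with zero impurity on both sides.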

  6. Decision Support and Web-based Implementation of Algorithms for the Ecological Assessment of Pesticides

    Science.gov (United States)

    The EPA registers pesticides for use in the US and approves imported pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Before a pesticide can be registered, the EPA must assess whether the pesticide can be used without being harmful to humans or po...

  7. Applying probability theory for the quality assessment of a wildfire spread prediction framework based on genetic algorithms.

    Science.gov (United States)

    Cencerrado, Andrés; Cortés, Ana; Margalef, Tomàs

    2013-01-01

    This work presents a framework for assessing how the existing constraints at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the probability theory principles. Thus, a strong statistical study is presented and oriented towards the characterization of such an adjustment technique in order to help the operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus.
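The adjustment stage described above can be illustrated with a minimal genetic algorithm that calibrates a single spread parameter so a simulator reproduces observed fire behaviour. Everything here is a toy stand-in under stated assumptions: `spread_area` is a fabricated one-parameter model, not the paper's wildfire simulator, and the GA operators are deliberately simple.

```python
import random

def spread_area(wind, hours):
    """Toy stand-in fire-spread model: burned area grows with wind speed."""
    return (1.0 + 0.5 * wind) * hours

def calibrate(observed, hours, pop_size=20, gens=30, seed=1):
    """Evolve the 'wind' parameter until the model matches the observed area."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    fitness = lambda w: -abs(spread_area(w, hours) - observed)
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                      # keep best half
        children = [max(0.0, rng.choice(elite) + rng.gauss(0, 0.3))
                    for _ in elite]                       # mutate elites
        pop = elite + children
    return max(pop, key=fitness)

# Observed 12.0 units burned in 4 h implies a true wind of 4.0 in this toy model
wind = calibrate(observed=12.0, hours=4.0)
```

In the two-stage scheme, the calibrated parameter from a past interval would then drive the prediction for the next interval.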

  8. Applying Probability Theory for the Quality Assessment of a Wildfire Spread Prediction Framework Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Andrés Cencerrado

    2013-01-01

    Full Text Available This work presents a framework for assessing how the existing constraints at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the probability theory principles. Thus, a strong statistical study is presented and oriented towards the characterization of such an adjustment technique in order to help the operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus.

  9. An enhanced real-time forest fire assessment algorithm based on video by using texture analysis

    Directory of Open Access Journals (Sweden)

    Gudikandhula Narasimha Rao

    2016-09-01

    Full Text Available As technology advances, the risk of sudden natural and man-made damage increases. One of the most dangerous disasters is fire: beyond its direct danger to human lives, it consumes forests, destroying the trees that provide humans with oxygen. Every year, large numbers of wildfires around the world burn forested lands, causing adverse ecological and social impacts. Early warning and immediate response are the only ways to avoid such disasters. This work describes a naïve method for detecting flames in forests using a Spatio Wildfire Prediction and Monitoring System (SWPMS). Fire information is retrieved from image regions using background subtraction and colour analysis, and fire behaviour is modelled by texture analysis using computer vision techniques. The central server receives fire-region reports from volunteers' smartphones, using the reported location coordinates captured from different angles and the Google Earth API. A Kalman filter estimator computes the position vector of the moving fire. Antenna or satellite systems gather information from fire regions, which a GIS then analyzes before sending alerts to local people in the forest regions and to the NDRF team.
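The background-subtraction and colour-analysis step can be illustrated with NumPy: a pixel is flagged when it differs sufficiently from the background model and its channels follow a typical flame profile (R > G > B, bright red). This is a hedged sketch with ad hoc thresholds, not the SWPMS implementation:

```python
import numpy as np

def fire_mask(frame, background, diff_thresh=30):
    """Flag pixels that moved vs. the background AND look fire-coloured."""
    # background subtraction: max per-channel absolute difference
    moving = np.abs(frame.astype(int) - background.astype(int)).max(axis=2) > diff_thresh
    # colour analysis: flame-like pixels satisfy R > G > B with a bright red channel
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    fire_colour = (r > g) & (g > b) & (r > 150)
    return moving & fire_colour
```

A real pipeline would follow this mask with the texture analysis the abstract describes, to reject fire-coloured but static or non-flickering regions.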

  10. HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM

    Directory of Open Access Journals (Sweden)

    Narendran Rajagopalan

    2012-01-01

    Full Text Available The performance of a Wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying the different variations of backoff algorithms that have been proposed, a new variant called the History based Probabilistic Backoff algorithm is proposed. Mathematical analysis and simulation results using NS-2 show that the proposed History based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.
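For reference, the Binary Exponential Backoff baseline doubles the contention window after each collision, whereas a history-based scheme can bias the window by the observed collision rate. The `HistoryBackoff` rule below is our own illustrative assumption, not the paper's exact algorithm:

```python
import random

def beb_slots(attempt, cw_min=16, cw_max=1024, rng=random):
    """Binary Exponential Backoff: the window doubles per collision, capped at cw_max."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    return rng.randrange(cw)          # uniform slot in [0, cw)

class HistoryBackoff:
    """Hedged sketch: grow the window with the observed collision rate
    (an assumption for illustration, not the paper's HBPB rule)."""
    def __init__(self, cw_min=16, cw_max=1024):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.collisions, self.tries = 0, 0
    def slots(self, rng=random):
        rate = self.collisions / self.tries if self.tries else 0.0
        cw = min(int(self.cw_min * (1 + rate * 63)), self.cw_max)
        return rng.randrange(max(cw, 1))
    def record(self, collided):
        self.tries += 1
        self.collisions += int(collided)
```

A station that has seen few collisions keeps a small window (short waits), while a collision-heavy history pushes it toward the maximum window.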

  11. An integrated risk assessment model of township-scaled land subsidence based on an evidential reasoning algorithm and fuzzy set theory.

    Science.gov (United States)

    Chen, Yu; Shu, Longcang; Burbey, Thomas J

    2014-04-01

    Land subsidence risk assessment (LSRA) is a multi-attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor-level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster-Shafer (D-S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi-Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information--quantitative or qualitative, complete or incomplete, and precise or imprecise--to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
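The ER algorithm builds on Dempster-Shafer combination of belief structures. Dempster's rule for two mass functions over focal elements can be sketched in a few lines; the two-grade risk frame below is illustrative, not the paper's indicator set:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    k = 1.0 - conflict                     # normalisation constant
    return {s: v / k for s, v in combined.items()}

# Two sources of evidence over risk grades {low, high}
m1 = {frozenset({"high"}): 0.8, frozenset({"low", "high"}): 0.2}
m2 = {frozenset({"high"}): 0.6, frozenset({"low", "high"}): 0.4}
m12 = dempster_combine(m1, m2)
```

Note how concordant evidence concentrates mass on {high} while the mass on the whole frame {low, high} (ignorance) shrinks — the mechanism that lets the model fuse quantitative and qualitative indicators.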

  12. ECG quality assessment based on a kernel support vector machine and genetic algorithm with a feature matrix

    Institute of Scientific and Technical Information of China (English)

    Ya-tao ZHANG; Cheng-yu LIU; Shou-shui WEI; Chang-zhi WEI; Fei-fei LIU

    2014-01-01

    We propose a systematic ECG quality classification method based on a kernel support vector machine (KSVM) and genetic algorithm (GA) to determine whether ECGs collected via mobile phone are acceptable or not. This method includes mainly three modules, i.e., lead-fall detection, feature extraction, and intelligent classification. First, lead-fall detection is executed to make the initial classification. Then the power spectrum, baseline drifts, amplitude difference, and other time-domain features for ECGs are analyzed and quantified to form the feature matrix. Finally, the feature matrix is assessed using KSVM and GA to determine the ECG quality classification results. A Gaussian radial basis function (GRBF) is employed as the kernel function of KSVM and its performance is compared with that of the Mexican hat wavelet function (MHWF). GA is used to determine the optimal parameters of the KSVM classifier and its performance is compared with that of the grid search (GS) method. The performance of the proposed method was tested on a database from PhysioNet/Computing in Cardiology Challenge 2011, which includes 1500 12-lead ECG recordings. True positive (TP), false positive (FP), and classification accuracy were used as the assessment indices. For training database set A (1000 recordings), the optimal results were obtained using the combination of lead-fall, GA, and GRBF methods, and the corresponding results were: TP 92.89%, FP 5.68%, and classification accuracy 94.00%. For test database set B (500 recordings), the optimal results were also obtained using the combination of lead-fall, GA, and GRBF methods, and the classification accuracy was 91.80%.

  13. Variables Bounding Based Retiming Algorithm

    Institute of Scientific and Technical Information of China (English)

    宫宗伟; 林争辉; 陈后鹏

    2002-01-01

    Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variables bounding. Through the computation of the lower and upper bounds on variables, the algorithm can significantly reduce the number of constraints and speed up the execution of retiming. Furthermore, the elements of matrices D and W are computed in a demand-driven way, which reduces the memory required. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.

  14. A USABILITY ASSESSMENT OF AN EPUB 3.0 BASED EBOOK DEVELOPED FOR ALGORITHMS AND PROGRAMMING COURSE

    Directory of Open Access Journals (Sweden)

    Gürcan Çetin

    2017-04-01

    Full Text Available Digital self-learning is becoming increasingly popular among students with the widespread use of mobile devices. In this context, many academic institutions produce digital learning sources for their learners. EBooks offer new opportunities for visualizing educational materials and accessing them from any place at any time, and the use of interactive eBooks in engineering education is inevitable. This paper presents an eBook for students of Information Systems Engineering, aimed at enhancing their algorithmic skills. Moreover, we evaluate the usability of this eBook: the results show how the students assessed its interface quality, information quality, and overall satisfaction. The research was conducted using a questionnaire method.

  15. A New Unsupervised Pre-processing Algorithm Based on Artificial Immune System for ERP Assessment in a P300-based GKT

    Directory of Open Access Journals (Sweden)

    S. Shojaeilangari

    2012-09-01

    Full Text Available In recent years, an increasing number of studies have focused on bio-inspired algorithms for solving elaborate engineering problems. The Artificial Immune System (AIS) is an artificial intelligence technique with the potential to solve problems in various fields. The immune system, due to its self-regulating nature, has been an inspiration source for unsupervised learning methods in pattern recognition. The purpose of this study is to apply the AIS to pre-process a lie-detection dataset to improve the recognition of guilty and innocent subjects. A new Unsupervised AIS (UAIS) is proposed as a pre-processing method before classification. We then applied three different classifiers to the pre-processed data for Event Related Potential (ERP) assessment in a P300-based Guilty Knowledge Test (GKT). Experimental results showed that UAIS is a successful pre-processing method that improves the classification rate. In our experiments, the classification accuracies of three different classifiers, K-Nearest Neighbour (KNN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA), increased after applying UAIS pre-processing. Applying a scattering criterion to assess the features before and after pre-processing showed that the proposed method maps the data from the primary feature space to a new space in which separability is significantly improved.
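A scattering criterion of the kind used to verify the improved separability can be computed as the ratio of between-class to within-class scatter. The trace-ratio form below is one common choice, assumed here for illustration:

```python
import numpy as np

def scatter_ratio(X, y):
    """Class-separability criterion: trace(S_b) / trace(S_w); higher = more separable."""
    X, y = np.asarray(X, float), np.asarray(y)
    mean = X.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * np.sum((mc - mean) ** 2)   # between-class scatter (trace)
        sw += np.sum((Xc - mc) ** 2)               # within-class scatter (trace)
    return sb / sw
```

A pre-processing step that pushes the two subject groups apart raises this ratio, which is the kind of before/after comparison the abstract reports.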

  16. AN ALGORITHM FOR THE ASSESSMENT OF SUBJECTIVE ADAPTIVE THERMAL COMFORT CONDITIONS BASED ON MULTI-AGENT SYSTEMS

    Directory of Open Access Journals (Sweden)

    C. Marino

    2011-10-01

    Full Text Available Thermal comfort conditions in the built environment are strictly related not only to the thermal and geometric building features and to air-conditioning systems, but also to the building's intended use, its usage profile, and the biological, metabolic, and psychological characteristics of its users. As a consequence, there is a strong demand for new models that are both subjective and adaptive to the environment, within a holistic vision of the problem that treats the user, plant, and building as one system. In such a frame, this paper characterizes an algorithm for subjective adaptive thermal comfort evaluation, enriching the model proposed by Fanger with an adaptive approach, using a Multi Agent System (MAS) based on intelligent agents able to follow users' needs and preferences across different contexts and expectations.

  17. Evolutionary algorithm based index assignment algorithm for noisy channel

    Institute of Scientific and Technical Information of China (English)

    李天昊; 余松煜

    2004-01-01

    A globally optimal solution to vector quantization (VQ) index assignment on noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in average distortion due to channel errors, over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for any bit error.
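The quantity such an index assignment minimizes is the expected distortion over the BSC: a transmitted index may arrive with some bits flipped and decode to the wrong codevector. A sketch for scalar codevectors and equiprobable inputs (function names are ours):

```python
def hamming(a, b):
    """Number of differing bits between two integer indices."""
    return bin(a ^ b).count("1")

def expected_distortion(codebook, assign, p=0.01):
    """Mean squared distortion of a VQ index assignment over a BSC.

    assign[i] is the b-bit channel index given to codevector i; p is the
    per-bit error probability; inputs are assumed equiprobable.
    """
    b = max(assign).bit_length()
    inverse = {idx: i for i, idx in enumerate(assign)}   # received index -> codevector
    total = 0.0
    for i, ci in enumerate(codebook):
        for j in range(2 ** b):
            d = hamming(assign[i], j)
            prob = (p ** d) * ((1 - p) ** (b - d))       # P(receive j | send assign[i])
            cj = codebook[inverse[j]]
            total += prob * (ci - cj) ** 2
    return total / len(codebook)
```

With the natural assignment [0, 1, 2, 3] for the codebook [0, 1, 2, 3], single-bit errors land on nearby codevectors, so its distortion is lower than that of a scrambled assignment such as [0, 3, 2, 1] — exactly the gap an EA-based search exploits.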

  18. An evidence gathering and assessment technique designed for a forest cover classification algorithm based on the Dempster-Shafer theory of evidence

    Science.gov (United States)

    Szymanski, David Lawrence

    This thesis presents a new approach for classifying Landsat 5 Thematic Mapper (TM) imagery that utilizes digitally represented, non-spectral data in the classification step. A classification algorithm that is based on the Dempster-Shafer theory of evidence is developed and tested for its ability to provide an accurate representation of forest cover on the ground at the Anderson et al (1976) level II. The research focuses on defining an objective, systematic method of gathering and assessing the evidence from digital sources including TM data, the normalized difference vegetation index, soils, slope, aspect, and elevation. The algorithm is implemented using the ESRI ArcView Spatial Analyst software package and the Grid spatial data structure with software coded in both ArcView Avenue and also C. The methodology uses frequency of occurrence information to gather evidence and also introduces measures of evidence quality that quantify the ability of the evidence source to differentiate the Anderson forest cover classes. The measures are derived objectively and empirically and are based on common principles of legal argument. The evidence assessment measures augment the Dempster-Shafer theory and the research will determine if they provide an argument that is mentally sound, credible, and consistent. This research produces a method for identifying, assessing, and combining evidence sources using the Dempster-Shafer theory that results in a classified image containing the Anderson forest cover class. Test results indicate that the new classifier performs with accuracy that is similar to the traditional maximum likelihood approach. However, confusion among the deciduous and mixed classes remains. The utility of the evidence gathering method and also the evidence assessment method is demonstrated and confirmed. The algorithm presents an operational method of using the Dempster-Shafer theory of evidence for forest classification.

  19. Application of detecting algorithm based on network

    Institute of Scientific and Technical Information of China (English)

    张凤斌; 杨永田; 江子扬; 孙冰心

    2004-01-01

    Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, this paper integrates genetic algorithms, exploiting their robustness and adaptability, into an intrusion detection system and proposes a detection algorithm based on network traffic. This algorithm is a real-time, self-learning algorithm and can detect undefined intrusion behaviors effectively.

  20. Diversity-Based Boosting Algorithm

    Directory of Open Access Journals (Sweden)

    Jafar A. Alzubi

    2016-05-01

    Full Text Available Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm called the DivBoosting algorithm is proposed and studied. Experiments on several data sets are conducted with both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe it has many advantages over the traditional boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.
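The diversity that DivBoosting selects for can be quantified, for example, by pairwise disagreement between the base classifiers' predictions. The abstract does not specify its measure, so the sketch below uses plain disagreement as an assumed stand-in:

```python
def disagreement(preds_a, preds_b):
    """Fraction of samples on which two classifiers disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def ensemble_diversity(all_preds):
    """Mean pairwise disagreement across an ensemble's prediction vectors."""
    pairs = [(i, j) for i in range(len(all_preds))
             for j in range(i + 1, len(all_preds))]
    return sum(disagreement(all_preds[i], all_preds[j]) for i, j in pairs) / len(pairs)
```

A pruning scheme in this spirit would keep the subset of base classifiers that balances individual accuracy against a high value of this diversity score.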

  1. Risk Assessment Framework and Algorithm of Power Systems Based on the Partitioned Multi-objective Risk Method

    Institute of Scientific and Technical Information of China (English)

    XIE Shaoyu; WANG Xiuli; WANG Xifan

    2011-01-01

    The average risk indices, such as the loss of load expectation (LOLE) and expected demand not supplied (EDNS), have been widely used in risk assessment of power systems. However, average indices cannot distinguish between events of low probability but high damage and events of high probability but low damage. In order to overcome these shortcomings, this paper proposes an extended risk analysis framework for power systems based on the partitioned multi-objective risk method (PMRM).
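The average indices the abstract starts from can be computed by enumerating generator outage states. A sketch for independent two-state units and constant load — the textbook formulation, not the paper's PMRM partitioning:

```python
from itertools import product

def risk_indices(units, load, hours=8760):
    """LOLE (hours/year) and EDNS (MW) from independent two-state generating units.

    units: list of (capacity_MW, forced_outage_rate); load: constant demand in MW.
    """
    lole = edns = 0.0
    for states in product([0, 1], repeat=len(units)):   # 1 = unit on outage
        prob, cap = 1.0, 0.0
        for (c, q), out in zip(units, states):
            prob *= q if out else (1 - q)
            cap += 0 if out else c
        shortfall = max(load - cap, 0.0)
        if shortfall > 0:
            lole += prob * hours        # probability-weighted loss-of-load time
            edns += prob * shortfall    # probability-weighted unserved demand
    return lole, edns
```

For two 50 MW units with a 10% forced outage rate serving a 60 MW load, any single outage causes a 10 MW shortfall; PMRM would go further and report these indices conditioned on severity partitions rather than averaged together.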

  2. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    Science.gov (United States)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín.

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to a BI-RADS score and the mode of their evaluations was considered as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but better training is required.
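The per-class PCA spaces and closest-class scoring can be sketched with NumPy: fit a mean-plus-principal-axes subspace per class from training histograms, then assign a new histogram to the class whose subspace reconstructs it with the smallest error. The two synthetic "classes" of 4-bin histograms below are purely illustrative:

```python
import numpy as np

def fit_subspace(hists, n_components=2):
    """PCA subspace (mean + top principal axes) for one class's histograms."""
    X = np.asarray(hists, float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(hist, subspace):
    """Distance from a histogram to its projection onto the class subspace."""
    mean, axes = subspace
    centred = np.asarray(hist, float) - mean
    proj = axes.T @ (axes @ centred)
    return float(np.linalg.norm(centred - proj))

def classify(hist, subspaces):
    """Assign the class whose subspace reconstructs the histogram best."""
    return min(subspaces, key=lambda c: reconstruction_error(hist, subspaces[c]))

# Illustrative training histograms: low-bin-heavy vs. high-bin-heavy images
subspaces = {
    "fatty": fit_subspace([[9, 5, 1, 0], [10, 4, 1, 1], [11, 5, 0, 0]]),
    "dense": fit_subspace([[0, 1, 5, 9], [1, 1, 4, 10], [0, 0, 5, 11]]),
}
```

The paper's version builds four such spaces, one per BI-RADS category, instead of the two shown here.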

  3. A Real-Time Location-Based Services System Using WiFi Fingerprinting Algorithm for Safety Risk Assessment of Workers in Tunnels

    Directory of Open Access Journals (Sweden)

    Peng Lin

    2014-01-01

    Full Text Available This paper investigates the feasibility of a real-time tunnel location-based services (LBS) system to provide worker safety protection and various services at a concrete dam site. In this study, received signal strength (RSS)-based location using a fingerprinting algorithm and artificial neural network (ANN) risk assessment are employed for position analysis. The tunnel LBS system achieves online, real-time, intelligent tracking and identification, and the on-site system provides functions such as worker emergency calls, track history, and location queries. Based on an ANN with strong nonlinear mapping and large-scale parallel processing capabilities, the proposed LBS system is effective for evaluating worker safety risk. The field implementation shows that the proposed location algorithm is reliable and accurate (3 to 5 meters), sufficient for real-time positioning service. The proposed LBS system is demonstrated and first applied to the second largest hydropower project in the world, tracking workers on the tunnel site to assure their safety. The results show that the system is simple and easily deployed.
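RSS fingerprinting compares a live RSS vector against a surveyed radio map and averages the positions of the closest matches. A weighted k-NN sketch, with a hypothetical three-access-point radio map as example data:

```python
import math

def locate(rss, fingerprints, k=3):
    """Weighted k-NN over an RSS radio map: average the k closest surveyed points."""
    def dist(f):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(rss, f["rss"])))
    nearest = sorted(fingerprints, key=dist)[:k]
    wsum = sum(1.0 / (dist(f) + 1e-6) for f in nearest)      # inverse-distance weights
    x = sum(f["pos"][0] / (dist(f) + 1e-6) for f in nearest) / wsum
    y = sum(f["pos"][1] / (dist(f) + 1e-6) for f in nearest) / wsum
    return x, y

# Illustrative radio map: RSS (dBm) from three access points at surveyed positions
radio_map = [
    {"pos": (0, 0), "rss": [-40, -70, -80]},
    {"pos": (10, 0), "rss": [-70, -40, -80]},
    {"pos": (0, 10), "rss": [-70, -70, -45]},
]
```

A live reading close to the first fingerprint's RSS pattern resolves to a position near (0, 0); in the paper this position estimate then feeds the ANN risk assessment.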

  4. An IOT Security Risk Autonomic Assessment Algorithm

    Directory of Open Access Journals (Sweden)

    Zhengchao Ma

    2013-02-01

    Full Text Available For Internet of Things (IoT) systems whose security risks exhibit both fuzziness and randomness, we qualitatively analyze the security risk level of IoT scenarios by describing generalized metrics for the potential impact and likelihood of occurrence of every major threat scenario. On this basis, we propose a self-assessment algorithm for IoT security risk that adopts a three-dimensional normal cloud model to integrate the risk indicators, and we study the multi-rule mapping between the qualitative inputs of the safety indicators and the quantitative reasoning of the self-assessment. Finally, we build a security risk assessment simulation platform and verify the validity and accuracy of the algorithm under defined risk levels and safety criterion domains.
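A one-dimensional normal cloud model generates "drops": each drop draws its own entropy En' around En (controlled by the hyper-entropy He), then a value around Ex, with a Gaussian membership degree. The paper's three-dimensional model extends this across risk indicators; the sketch below shows a single dimension:

```python
import math
import random

def cloud_drops(ex, en, he, n, seed=0):
    """Normal cloud model drops for Ex (expectation), En (entropy), He (hyper-entropy)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = abs(rng.gauss(en, he)) or 1e-9            # per-drop entropy
        x = rng.gauss(ex, en_prime)                          # drop position
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership degree
        drops.append((x, mu))
    return drops
```

The spread of the drops captures randomness while the membership curve captures fuzziness, which is why the cloud model suits indicators that are both fuzzy and random.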

  5. Power System Security Assessment Based on Unsupervised Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    王同文; 管霖

    2011-01-01

    A new method for power system security assessment based on an unsupervised clustering algorithm is proposed. With key steady-state variables as inputs, the clustering algorithm extracts knowledge of the spatial distribution of the samples, and this knowledge is used to assess system stability. The algorithm constructs a subspace starting from each sample and repeatedly extends it to obtain the optimal subspace containing the data distribution structure; the aggregation of the optimal subspaces constitutes the clustering result, and the class-boundary samples reveal the spatial distribution structure of the training set. The approach adapts well to different data shapes and is suitable for mining incremental data sets. Application results on two IEEE test systems confirm the effectiveness of the proposed security assessment method.

  6. AN SVAD ALGORITHM BASED ON FNNKD METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Dong; Zhang Yan; Kuang Jingming

    2002-01-01

    The capacity of a mobile communication system is improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with that of the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.

  7. A New Page Ranking Algorithm Based On WPRVOL Algorithm

    Directory of Open Access Journals (Sweden)

    Roja Javadian Kootenae

    2013-03-01

    Full Text Available The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find their desired information among this massive volume of information more easily. What is important in search engines, and what distinguishes them from one another, is the page ranking algorithm they use. In this paper a new page ranking algorithm based on the Weighted Page Ranking based on Visits of Links (WPRVOL) algorithm, called WPR'VOL for short, is proposed for search engines. The proposed algorithm considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of a page's first-level in-links and distributes rank scores based on the popularity of the pages, whereas the proposed algorithm considers both the in-links of the page itself (first-level in-links) and the in-links of the pages that point to it (second-level in-links) when calculating its rank, so more relevant pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns higher rank to pages when both the pages themselves and the pages that point to them are important.
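The visit-weighted propagation idea can be sketched as a PageRank variant in which a page splits its rank among out-links in proportion to recorded link visits. This follows the WPRVOL idea in spirit only; the exact WPR'VOL second-level weighting is not reproduced, and the toy graph and visit counts are ours:

```python
def visit_weighted_rank(links, visits, damping=0.85, iters=50):
    """PageRank variant: rank flows to out-links in proportion to visit counts.

    links[p]: list of pages p links to; visits[(p, q)]: visit count of link p->q.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}   # teleport share
        for p in pages:
            total = sum(visits[(p, q)] for q in links[p]) or 1
            for q in links[p]:
                new[q] += damping * rank[p] * visits[(p, q)] / total
        rank = new
    return rank

# Toy graph: page "a" links mostly-visited to "b", "b" to "c", "c" back to "a"
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
visits = {("a", "b"): 9, ("a", "c"): 1, ("b", "c"): 5, ("c", "a"): 5}
rank = visit_weighted_rank(links, visits)
```

Because every page distributes its full rank, the scores remain a probability distribution; the visit counts merely redirect where each page's share flows.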

  8. DNA Coding Based Knowledge Discovery Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang

    2002-01-01

    A novel DNA coding based knowledge discovery algorithm is proposed, and an example verifying its validity is given. It is shown that this algorithm can discover new simplified rules from the original rule set efficiently.

  9. Distance Concentration-Based Artificial Immune Algorithm

    Institute of Scientific and Technical Information of China (English)

    LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang

    2005-01-01

    The diversity, adaptation, and memory of the biological immune system attract much attention from researchers, and several optimization algorithms based on the immune system have been proposed. In this paper, the distance concentration-based artificial immune algorithm (DCAIA) is proposed to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA is better at avoiding premature convergence, maintaining antibody diversity, and enhancing the convergence rate.

  10. Cloud storage based mobile assessment facility for patients with post-traumatic stress disorder using integrated signal processing algorithm

    Science.gov (United States)

    Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.

    2017-06-01

    The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated by a Raspberry Pi. Results are stored in a database for easy and quick access, and the databases are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.

  11. Mature Forest Resources Assets Assessment Model Based on LM-BP Algorithm

    Institute of Scientific and Technical Information of China (English)

    赖晓燕; 颜桂梅

    2013-01-01

    In order to improve assessment efficiency and reduce costs, the application of neural network technology to forest resource asset assessment modeling was studied. To address the shortcomings of traditional BP neural networks, the Bayesian regularization algorithm and BP neural networks are combined to build an LM-BP-based assessment model for mature forest. Simulation results show that the model is effective and its prediction accuracy is high.

  12. Classification of positive blood cultures: computer algorithms versus physicians' assessment - development of tools for surveillance of bloodstream infection prognosis using population-based laboratory databases

    Directory of Open Access Journals (Sweden)

    Gradel Kim O

    2012-09-01

    Full Text Available Abstract Background Information from blood cultures is utilized for infection control, public health surveillance, and clinical outcome research. This information can be enriched by physicians’ assessments of positive blood cultures, which are, however, often available from selected patient groups or pathogens only. The aim of this work was to determine whether patients with positive blood cultures can be classified effectively for outcome research in epidemiological studies by the use of administrative data and computer algorithms, taking physicians’ assessments as reference. Methods Physicians’ assessments of positive blood cultures were routinely recorded at two Danish hospitals from 2006 through 2008. The physicians’ assessments classified positive blood cultures as: (a) contamination or bloodstream infection; (b) bloodstream infection as mono- or polymicrobial; (c) bloodstream infection as community- or hospital-onset; (d) community-onset bloodstream infection as healthcare-associated or not. We applied the computer algorithms to data from laboratory databases and the Danish National Patient Registry to classify the same groups and compared these with the physicians’ assessments as reference episodes. For each classification, we tabulated episodes derived by the physicians’ assessment and the computer algorithm and compared 30-day mortality between concordant and discrepant groups with adjustment for age, gender, and comorbidity. Results Physicians derived 9,482 reference episodes from 21,705 positive blood cultures. The agreement between computer algorithms and physicians’ assessments was high for contamination vs. bloodstream infection (8,966/9,482 reference episodes [96.6%], Kappa = 0.83) and mono- vs. polymicrobial bloodstream infection (6,932/7,288 reference episodes [95.2%], Kappa = 0.76), but lower for community- vs. hospital-onset bloodstream infection (6,056/7,288 reference episodes [83.1%], Kappa = 0.57) and

  13. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence by minimizing the distance between their descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
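    The core matching step (track a feature by minimizing the distance between its descriptors in consecutive frames) can be sketched with synthetic descriptors. This is a minimal illustration, assuming a hypothetical `match_descriptors` helper and short 8-dimensional vectors; real SIFT descriptors are 128-dimensional.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """For each descriptor in frame A, return the index of the
    descriptor in frame B at minimum Euclidean distance."""
    # Pairwise squared distances via broadcasting: shape (n_a, n_b)
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Synthetic stand-ins for SIFT descriptors: frame B contains the same
# features as frame A, shuffled and slightly perturbed by noise
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 8))
perm = rng.permutation(5)
desc_b = desc_a[perm] + rng.normal(scale=0.01, size=(5, 8))

matches = match_descriptors(desc_a, desc_b)
# matches[i] recovers where feature i moved in the shuffled frame
```

    In practice a ratio test between the nearest and second-nearest distances is usually added to reject ambiguous matches.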

  14. Neural Network-Based Hyperspectral Algorithms

    Science.gov (United States)

    2016-06-07

    Neural Network-Based Hyperspectral Algorithms Walter F. Smith, Jr. and Juanita Sandidge Naval Research Laboratory Code 7340, Bldg 1105 Stennis Space...our effort is development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as...validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm. APPROACH

  15. Assessment of a 2D electronic portal imaging devices-based dosimetry algorithm for pretreatment and in-vivo midplane dose verification

    Directory of Open Access Journals (Sweden)

    Ali Jomehzadeh

    2016-01-01

    Conclusion: The 2D EPID-based dosimetry algorithm provides an accurate method to verify the dose of a simple 10 × 10 cm2 field, in two dimensions, inside a homogenous slab phantom and an IMRT prostate plan, as well as in 3D conformal plans (prostate, head-and-neck, and lung plans) applied using an anthropomorphic phantom and in vivo. However, further investigation to improve the 2D EPID dosimetry algorithm for the head-and-neck case is necessary.

  16. A New Page Ranking Algorithm Based On WPRVOL Algorithm

    OpenAIRE

    Roja Javadian Kootenae; Seyyed Mohsen Hashemi; mehdi afzali

    2013-01-01

    The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find their desired information among this massive volume of information more easily. What distinguishes one search engine from another, however, is the page ranking algorithm it uses. In this paper a new page ranking algorithm based on "Weighted Page Ranking based on Visits of ...
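    The abstract is truncated before describing the WPRVOL variant itself; as background, the classic PageRank power iteration that such weighted variants modify can be sketched as follows. The toy graph and damping factor are illustrative assumptions.

```python
def pagerank(links, d=0.85, iters=50):
    """Classic PageRank by power iteration.
    links: dict mapping each node to its list of outgoing links."""
    nodes = list(links)
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in nodes}
        for p, outs in links.items():
            if not outs:                      # dangling node: spread evenly
                for q in nodes:
                    new[q] += d * rank[p] / n
            else:                             # share rank among out-links
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# C receives rank from both A and B, so it ends up ranked highest
```

    Weighted variants replace the uniform share `rank[p] / len(outs)` with link-specific weights, e.g. derived from visit counts.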

  17. New Gradient-Based Variable Step Size LMS Algorithms

    Directory of Open Access Journals (Sweden)

    Yanling Hao

    2008-03-01

    Full Text Available Two new gradient-based variable step size least-mean-square (VSSLMS) algorithms are proposed on the basis of a concise assessment of the weaknesses of previous VSSLMS algorithms in high measurement-noise environments. The first algorithm is designed for applications where the measurement noise signal is statistically stationary and the second for statistically nonstationary noise. Steady-state performance analyses are provided for both algorithms and verified by simulations. The proposed algorithms are also confirmed by simulations to obtain both a fast convergence rate and a small steady-state excess mean square error (EMSE), and to outperform existing VSSLMS algorithms. To facilitate practical application, parameter choice guidelines are provided for the new algorithms.

  18. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. The results of experiments with synthetic and real data show that, in contrast to FCM, the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures. Kernel-based clustering can thus be expected to be an important research direction of fuzzy clustering analysis.
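    As background, one widely used kernelized FCM variant (Gaussian kernel, prototypes kept in input space) can be sketched as follows; this is a minimal sketch of the general technique, not necessarily the paper's exact formulation, and the blob data, `sigma`, and initialization are illustrative assumptions.

```python
import numpy as np

def kfcm(X, c=2, m=2.0, sigma=3.0, iters=50):
    """Kernel fuzzy c-means with an RBF kernel and input-space prototypes."""
    n = len(X)
    V = X[:: max(1, n // c)][:c].copy()      # simple spread-out initialization
    for _ in range(iters):
        # RBF kernel between every prototype and every sample: shape (c, n)
        K = np.exp(-((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)
                   / (2 * sigma ** 2))
        D = 2.0 * (1.0 - K) + 1e-12          # kernel-induced squared distance
        U = D ** (-1.0 / (m - 1.0))          # fuzzy memberships before...
        U /= U.sum(axis=0)                   # ...normalizing columns to 1
        W = (U ** m) * K                     # kernel-weighted memberships
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated 2-D blobs
rng = np.random.default_rng(0)
blob_a = rng.normal(0.0, 0.3, size=(20, 2))
blob_b = rng.normal(5.0, 0.3, size=(20, 2))
X = np.vstack([blob_a, blob_b])
U, V = kfcm(X)
labels = U.argmax(axis=0)
# Each prototype settles near one blob center
```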

  19. ILU preconditioning based on the FAPINV algorithm

    Directory of Open Access Journals (Sweden)

    Davod Khojasteh Salkuyeh

    2015-01-01

    Full Text Available A technique for computing an ILU preconditioner based on the factored approximate inverse (FAPINV) algorithm is presented. We show that this algorithm is well-defined for H-matrices. Moreover, when used in conjunction with Krylov-subspace-based iterative solvers such as the GMRES algorithm, it results in reliable solvers. Numerical experiments on some test matrices are given to show the efficiency of the new ILU preconditioner.

  20. A solution quality assessment method for swarm intelligence optimization algorithms.

    Science.gov (United States)

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool, widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution an algorithm obtains for a practical problem; this greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts, and the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, using relevant statistical knowledge, the evaluation result is obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  1. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies.

    Directory of Open Access Journals (Sweden)

    Asad Abdi

    Full Text Available Summarization is a process of selecting important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be an important tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, computer-assisted assessment can be used to help teachers conduct this task more effectively. This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing.

  2. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    Science.gov (United States)

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background Summarization is a process of selecting important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be an important tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, computer-assisted assessment can be used to help teachers conduct this task more effectively. Design/Results This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  3. Multicast Routing Based on Hybrid Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    CAO Yuan-da; CAI Gui

    2005-01-01

    A new multicast routing algorithm based on the hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, together with a fitness function that is computed easily and makes the algorithm converge quickly. A new approach for setting the HGA's parameters is provided. The simulation shows that the approach can greatly increase the convergence ratio, and that the fitting values of this algorithm's parameters differ from those of the original algorithms: the optimal mutation probability of the HGA was 0.50 in the experiment, versus 0.07 for the SGA. The population size has a significant influence on the HGA's convergence ratio when its mutation probability is larger; an algorithm with a small population size has a high average convergence rate, while population size has little influence on the HGA with the lower mutation probability.

  4. Power Transmission System Vulnerability Assessment Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    E. Karimi

    2012-11-01

    Full Text Available Recent blackouts in power systems have shown the necessity of vulnerability assessment. Among all factors, transmission system components play the most important role. Power system vulnerability assessment can capture the cascading outages that result in large blackouts and is an effective tool for power system engineers to identify system bottlenecks and weak points. In this paper a new method based on the fault-chains concept is developed which uses new measures. A genetic algorithm with an effective structure is used for finding vulnerable branches in a practical power transmission system, and the analytic hierarchy process is used to determine the weighting factors in the genetic algorithm's fitness function. Finally, numerical results for the Isfahan Regional Electric Company are presented, which verify the effectiveness and precision of the proposed method in practical experiments.

  5. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
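    The eigenvalue-replacement step described in the abstract can be sketched directly in NumPy; the small floor on the modified eigenvalues is an added numerical safeguard, not a detail from the paper.

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Replace negative Hessian eigenvalues by their absolute values,
    reconstruct the Hessian, and solve for the search direction."""
    w, Q = np.linalg.eigh(hess)              # Hessian is symmetric
    w_mod = np.maximum(np.abs(w), 1e-8)      # |lambda|, guarded away from 0
    H_mod = Q @ np.diag(w_mod) @ Q.T         # positive-definite reconstruction
    return -np.linalg.solve(H_mod, grad)

# Indefinite Hessian: plain Newton (-H^{-1} g) would give [-0.5, 1],
# which is an ascent direction here (g . d > 0)
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])
d = modified_newton_direction(g, H)
# d = [-0.5, -1.0], and g . d = -1.5 < 0: a descent direction
```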

  6. Immune Based Intrusion Detector Generating Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Xiao-mei; YU Ge; XIANG Guang

    2005-01-01

    Immune-based intrusion detection approaches are studied. Methods of constructing the self set and generating mature detectors are researched and improved, and a binary-encoding-based self set construction method is applied. First, the traditional mature-detector generating algorithm is improved to generate mature detectors and detect intrusions faster. Then, a novel mature-detector generating algorithm is proposed based on the negative selection mechanism. With this algorithm, fewer mature detectors are needed to detect abnormal activities in the network, so the speed of generating mature detectors and detecting intrusions is improved. Compared with systems based on existing algorithms, the intrusion detection system based on this algorithm has higher speed and accuracy.
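    A minimal sketch of the negative selection mechanism the abstract builds on (binary encoding with r-contiguous-bits matching): random candidate detectors that match any self string are censored, and the survivors become mature detectors. The string length, the value of r, and the self set are illustrative assumptions, not the paper's parameters.

```python
import random

def r_contiguous_match(a, b, r):
    """Detector a matches string b if they agree on r contiguous bits."""
    return any(a[i:i + r] == b[i:i + r] for i in range(len(a) - r + 1))

def generate_detectors(self_set, n_detectors, length=8, r=4, seed=0):
    """Negative selection: keep only candidates that match no self string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)          # survived censoring: mature
    return detectors

self_set = ["00000000", "00001111"]
detectors = generate_detectors(self_set, 5)
# No mature detector matches any self string, so anything a detector
# does match is flagged as non-self (a potential intrusion)
```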

  7. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 meters GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (Blue, Green, Red, Near-Infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm; the cloud statistics are subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence to determine the cloud statistics. For the post-processing analysis, a box-counting fractal method is implemented. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics across the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.

  8. Lane Detection Based on Machine Learning Algorithm

    National Research Council Canada - National Science Library

    Chao Fan; Jingbo Xu; Shuai Di

    2013-01-01

    In order to improve accuracy and robustness of the lane detection in complex conditions, such as the shadows and illumination changing, a novel detection algorithm was proposed based on machine learning...

  9. QPSO-based adaptive DNA computing algorithm.

    Science.gov (United States)

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters tuned toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm uses the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm.

  10. QPSO-Based Adaptive DNA Computing Algorithm

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters tuned toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm uses the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm.
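    As background, the QPSO optimizer that drives the adaptive parameter tuning can be sketched on a toy objective. This is a generic quantum-behaved PSO in the style of Sun et al., with an assumed fixed contraction coefficient `beta`; it is not the paper's tuned implementation, and the search bounds and objective are illustrative.

```python
import math
import random

def qpso(f, dim=2, n_particles=20, iters=200, beta=0.75, seed=0):
    """Minimal quantum-behaved PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        # Mean-best position over all personal bests
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(X):
            for d in range(dim):
                phi = rng.random()
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - rng.random()          # in (0, 1], keeps log finite
                step = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = attractor + step if rng.random() < 0.5 else attractor - step
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = qpso(sphere)
# best approaches the global minimum of the sphere function at the origin
```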

  11. Evolutionary algorithm based configuration interaction approach

    CERN Document Server

    Chakraborty, Rahul

    2016-01-01

    A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from the genetic algorithm. We have tested its accuracy on the 1D Hubbard problem and a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.

  12. A New Base-6 FFT Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new FFT algorithm, called the base-6 FFT algorithm, has been deduced. The operation count for calculating the DFT of a complex sequence of length N = 6^r by the base-6 FFT algorithm is Mr(N) = (14/3)N·log6(N) − 4N + 4 real multiplications and Ar(N) = (23/3)N·log6(N) − 2N + 2 real additions. The operation count for the DFT of a real sequence is half that of a complex sequence.
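    Taking the stated operation counts at face value (with the length read as N = 6^r, which the log-base-6 terms imply), they can be evaluated directly:

```python
import math

def base6_mults(N):
    """Real multiplications for a length-N complex DFT, per the stated formula."""
    return 14 / 3 * N * math.log(N, 6) - 4 * N + 4

def base6_adds(N):
    """Real additions for a length-N complex DFT, per the stated formula."""
    return 23 / 3 * N * math.log(N, 6) - 2 * N + 2

N = 6 ** 3                      # N = 216, so log6(N) = 3
mults, adds = base6_mults(N), base6_adds(N)
# mults = 14*216 - 4*216 + 4 = 2164, adds = 23*216 - 2*216 + 2 = 4538
# For a real input sequence the counts are roughly halved
```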

  13. Full Monte Carlo and measurement-based overall performance assessment of improved clinical implementation of eMC algorithm with emphasis on lower energy range.

    Science.gov (United States)

    Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo

    2016-06-01

    New version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system has a model for the 4 MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD and also in regions of large dose gradients; still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed inferiorly compared with 13.6.23.

  14. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...

  15. Assessment of a 2D electronic portal imaging devices-based dosimetry algorithm for pretreatment and in-vivo midplane dose verification

    Science.gov (United States)

    Jomehzadeh, Ali; Shokrani, Parvaneh; Mohammadi, Mohammad; Amouheidari, Alireza

    2016-01-01

    Background: The use of electronic portal imaging devices (EPIDs) is a method for the dosimetric verification of radiotherapy plans, both pretreatment and in vivo. The aim of this study is to test a 2D EPID-based dosimetry algorithm for dose verification of some plans inside a homogenous and anthropomorphic phantom and in vivo as well. Materials and Methods: Dose distributions were reconstructed from EPID images using a 2D EPID dosimetry algorithm inside a homogenous slab phantom for a simple 10 × 10 cm2 box technique, 3D conformal (prostate, head-and-neck, and lung), and intensity-modulated radiation therapy (IMRT) prostate plans inside an anthropomorphic (Alderson) phantom and in the patients (one fraction in vivo) for 3D conformal plans (prostate, head-and-neck and lung). Results: The planned and EPID dose difference at the isocenter, on an average, was 1.7% for pretreatment verification and less than 3% for all in vivo plans, except for head-and-neck, which was 3.6%. The mean γ values for a seven-field prostate IMRT plan delivered to the Alderson phantom varied from 0.28 to 0.65. For 3D conformal plans applied for the Alderson phantom, all γ1% values were within the tolerance level for all plans and in both anteroposterior and posteroanterior (AP-PA) beams. Conclusion: The 2D EPID-based dosimetry algorithm provides an accurate method to verify the dose of a simple 10 × 10 cm2 field, in two dimensions, inside a homogenous slab phantom and an IMRT prostate plan, as well as in 3D conformal plans (prostate, head-and-neck, and lung plans) applied using an anthropomorphic phantom and in vivo. However, further investigation to improve the 2D EPID dosimetry algorithm for a head-and-neck case, is necessary. PMID:28028511

  16. Coronary risk assessment among intermediate risk patients using a clinical and biomarker based algorithm developed and validated in two population cohorts.

    Science.gov (United States)

    Cross, D S; McCarty, C A; Hytopoulos, E; Beggs, M; Nolan, N; Harrington, D S; Hastie, T; Tibshirani, R; Tracy, R P; Psaty, B M; McClelland, R; Tsao, P S; Quertermous, T

    2012-11-01

    Many coronary heart disease (CHD) events occur in individuals classified as intermediate risk by commonly used assessment tools. Over half the individuals presenting with a severe cardiac event, such as myocardial infarction (MI), have at most one risk factor as included in the widely used Framingham risk assessment. Individuals classified as intermediate risk, who are actually at high risk, may not receive guideline-recommended treatments. A clinically useful method for accurately predicting 5-year CHD risk among intermediate risk patients remains an unmet medical need. This study sought to develop a CHD Risk Assessment (CHDRA) model that improves 5-year risk stratification among intermediate risk individuals. Assay panels for biomarkers associated with atherosclerosis biology (inflammation, angiogenesis, apoptosis, chemotaxis, etc.) were optimized for measuring baseline serum samples from 1084 initially CHD-free Marshfield Clinic Personalized Medicine Research Project (PMRP) individuals. A multivariable Cox regression model was fit using the most powerful risk predictors within the clinical and protein variables identified by repeated cross-validation. The resulting CHDRA algorithm was validated in a Multi-Ethnic Study of Atherosclerosis (MESA) case-cohort sample. A CHDRA algorithm of age, sex, diabetes, and family history of MI, combined with serum levels of seven biomarkers (CTACK, Eotaxin, Fas Ligand, HGF, IL-16, MCP-3, and sFas) yielded a clinical net reclassification index of 42.7% (p definition with the MESA samples and inability to include PMRP fatal CHD events. A novel risk score of serum protein levels plus clinical risk factors, developed and validated in independent cohorts, demonstrated clinical utility for assessing the true risk of CHD events in intermediate risk patients. Improved accuracy in cardiovascular risk classification could lead to improved preventive care and fewer deaths.

  17. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D...

  19. Function Optimization Based on Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2014-01-01

    Full Text Available Optimization methods are important in engineering design and application. The quantum genetic algorithm has the characteristics of good population diversity, rapid convergence, and good global search capability; it combines quantum computation with the genetic algorithm. A novel quantum genetic algorithm is proposed, called Variable-boundary-coded Quantum Genetic Algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes, so that much shorter chromosome strings can be obtained. The method of encoding and decoding chromosomes is first described; then a new adaptive selection scheme for the angle parameters used in the rotation gate is put forward, based on the core ideas and principles of quantum computation. Eight typical functions are selected for optimization to evaluate the effectiveness and performance of vbQGA against the standard Genetic Algorithm (sGA) and the Genetic Quantum Algorithm (GQA). The simulation results show that vbQGA is significantly superior to sGA in all aspects and outperforms GQA in robustness and solving velocity, especially for multidimensional and complicated functions.
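
    The two basic QGA operations the abstract refers to, rotating a qubit's amplitudes toward better solutions and collapsing (observing) a qubit chromosome into a classical string, can be sketched as follows. This is a generic binary-coded QGA fragment, not the paper's variable-boundary vbQGA encoding, and the rotation angle `delta` stands in for the adaptive angle-selection scheme the authors propose.

```python
import math
import random

def rotate_qubit(alpha, beta, delta):
    """Apply a quantum rotation gate with angle delta to one qubit (alpha, beta);
    the unit norm alpha^2 + beta^2 = 1 is preserved."""
    a = math.cos(delta) * alpha - math.sin(delta) * beta
    b = math.sin(delta) * alpha + math.cos(delta) * beta
    return a, b

def observe(chromosome):
    """Collapse a qubit chromosome (list of (alpha, beta) pairs) into a
    classical bit string; P(bit = 1) = beta^2."""
    return [1 if random.random() < beta ** 2 else 0 for _, beta in chromosome]
```

In a full QGA loop each generation would observe the population, evaluate fitness, and rotate every qubit toward the best observed solution.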

  20. A subjective study to evaluate video quality assessment algorithms

    Science.gov (United States)

    Seshadrinathan, Kalpana; Soundararajan, Rajiv; Bovik, Alan C.; Cormack, Lawrence K.

    2010-02-01

    Automatic methods to evaluate the perceptual quality of a digital video sequence have widespread applications wherever the end-user is a human. Several objective video quality assessment (VQA) algorithms exist, whose performance is typically evaluated using the results of a subjective study performed by the video quality experts group (VQEG) in 2000. There is a great need for a free, publicly available subjective study of video quality that embodies the state of the art in video processing technology and that is effective in challenging and benchmarking objective VQA algorithms. In this paper, we present a study and a resulting database, known as the LIVE Video Quality Database, where 150 distorted video sequences obtained from 10 different source videos were subjectively evaluated by 38 human observers. Our study includes videos that have been compressed by MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through error prone IP and wireless networks. The subjective evaluation was performed using a single stimulus paradigm with hidden reference removal, where the observers were asked to provide their opinion of video quality on a continuous scale. We also present the performance of several freely available objective, full reference (FR) VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in our study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy. The LIVE Video Quality Database is freely available for download and we hope that our study provides researchers with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
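
    Benchmarking an objective VQA algorithm against such a database typically reduces to correlating its predicted scores with the subjective mean opinion scores. A minimal sketch of the Spearman rank-order correlation commonly reported in these comparisons (assuming no tied scores; production evaluations would use scipy.stats.spearmanr, which handles ties):

```python
def spearman_rho(objective_scores, mos):
    """Spearman rank-order correlation between objective quality scores
    and mean opinion scores (no ties assumed)."""
    n = len(objective_scores)
    # Map each value to its 1-based rank.
    rank_x = {v: i + 1 for i, v in enumerate(sorted(objective_scores))}
    rank_y = {v: i + 1 for i, v in enumerate(sorted(mos))}
    # Sum of squared rank differences over paired samples.
    d2 = sum((rank_x[a] - rank_y[b]) ** 2
             for a, b in zip(objective_scores, mos))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A value near 1 indicates the algorithm ranks videos in the same order as human observers.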

  1. Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms, and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the frame of the algorithms is unified, the method is simple, and its implementation and revision are easy.

  2. Secure OFDM communications based on hashing algorithms

    Science.gov (United States)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.

  3. Two Algorithms for Web Applications Assessment

    Directory of Open Access Journals (Sweden)

    Stavros Ioannis Valsamidis

    2011-09-01

    Full Text Available The usage of web applications can be measured with the use of metrics. In an LMS, a typical web application, there are no appropriate metrics that would facilitate qualitative and quantitative measurement. The purpose of this paper is to propose the use of existing techniques in a different way, in order to analyze the log file of a typical LMS and deduce useful conclusions. Three metrics for course usage measurement are used. The paper also describes two algorithms for course classification and suggested actions. The metrics and the algorithms were applied to Open eClass LMS tracking data of an academic institution. The results from 39 courses presented interesting insights. Although the case study concerns an LMS, the approach can also be applied to other web applications such as e-government, e-commerce, e-banking, blogs, etc.
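
    To illustrate the kind of log-derived usage metric the paper describes, the sketch below computes two generic ones, total hits and distinct users per course, from simple "user,course" log records. Both the metric names and the log format here are hypothetical stand-ins, not the paper's actual metrics or the Open eClass log layout.

```python
from collections import Counter, defaultdict

def course_usage(log_lines):
    """Per-course usage metrics from 'user,course' log records:
    total hits and number of distinct users."""
    hits = Counter()
    users = defaultdict(set)
    for line in log_lines:
        user, course = (field.strip() for field in line.split(","))
        hits[course] += 1          # every request counts as a hit
        users[course].add(user)    # distinct users per course
    return {c: {"hits": hits[c], "distinct_users": len(users[c])}
            for c in hits}
```

Courses could then be classified by thresholding such metrics, e.g. flagging courses with many hits but few distinct users.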

  4. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

    Using Bayesian networks to model promising solutions from the current population of evolutionary algorithms can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem that consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by researching the local metric relationship of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.

  5. A Practical Propositional Knowledge Base Revision Algorithm

    Institute of Scientific and Technical Information of China (English)

    陶雪红; 孙伟; 等

    1997-01-01

    This paper gives an outline of knowledge base revision and some recently presented complexity results about propositional knowledge base revision. Different methods for revising propositional knowledge bases have been proposed recently by several researchers, but all methods are intractable in the general case. For practical application, this paper presents a revision method for a special case, and gives its corresponding polynomial algorithm.

  6. Probabilistic risk assessment for six vapour intrusion algorithms

    NARCIS (Netherlands)

    Provoost, J.; Reijnders, L.; Bronders, J.; Van Keer, I.; Govaerts, S.

    2014-01-01

    A probabilistic assessment with sensitivity analysis using Monte Carlo simulation for six vapour intrusion algorithms, used in various regulatory frameworks for contaminated land management, is presented here. In addition a deterministic approach with default parameter sets is evaluated against obse

  9. Second Attribute Algorithm Based on Tree Expression

    Institute of Scientific and Technical Information of China (English)

    Su-Qing Han; Jue Wang

    2006-01-01

    One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions to meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm implied by the second attribute theorem computes on the discernibility matrix; its time complexity is O(n² × m), where n is the number of objects and m the number of attributes of an information system. This paper presents another effective second attribute algorithm for facilitating the use of the second attribute theorem, with computation on the tree expression of an information system. The time complexity of the new algorithm is linear in n. This algorithm is proved to be equivalent to the algorithm on the discernibility matrix.

  10. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  11. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed b

  12. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  13. A new adaptive testing algorithm for shortening health literacy assessments

    Directory of Open Access Journals (Sweden)

    Currie Leanne M

    2011-08-01

    Full Text Available Abstract Background Low health literacy has a detrimental effect on health outcomes, as well as ability to use online health resources. Good health literacy assessment tools must be brief to be adopted in practice; test development from the perspective of item-response theory requires pretesting on large participant populations. Our objective was to develop a novel classification method for developing brief assessment instruments that does not require pretesting on large numbers of research participants, and that would be suitable for computerized adaptive testing. Methods We present a new algorithm that uses principles of measurement decision theory (MDT and Shannon's information theory. As a demonstration, we applied it to a secondary analysis of data sets from two assessment tests: a study that measured patients' familiarity with health terms (52 participants, 60 items and a study that assessed health numeracy (165 participants, 8 items. Results In the familiarity data set, the method correctly classified 88.5% of the subjects, and the average length of test was reduced by about 50%. In the numeracy data set, for a two-class classification scheme, 96.9% of the subjects were correctly classified with a more modest reduction in test length of 35.7%; a three-class scheme correctly classified 93.8% with a 17.7% reduction in test length. Conclusions MDT-based approaches are a promising alternative to approaches based on item-response theory, and are well-suited for computerized adaptive testing in the health domain.
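
    The core of such an MDT-based adaptive test can be sketched as two ingredients: a Bayes update of the posterior over mastery classes after each response, and an item-selection rule that maximizes expected Shannon information gain. This is a generic two-class sketch of the idea, not the authors' exact algorithm; `item_probs` (the per-class probability of answering an item correctly) is an assumed calibration input.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def update(prior, item_probs, response):
    """Bayes update of the class posterior after a correct (1) or
    incorrect (0) response; item_probs[k] = P(correct | class k)."""
    post = [pr * (p if response else 1.0 - p)
            for pr, p in zip(prior, item_probs)]
    z = sum(post)
    return [q / z for q in post]

def expected_info_gain(prior, item_probs):
    """Expected reduction in Shannon entropy from administering this item;
    the adaptive test picks the unasked item maximizing this quantity."""
    p_correct = sum(pr * p for pr, p in zip(prior, item_probs))
    gain = entropy(prior)
    for response, p_resp in ((1, p_correct), (0, 1.0 - p_correct)):
        if p_resp > 0:
            gain -= p_resp * entropy(update(prior, item_probs, response))
    return gain
```

Testing stops once the posterior mass on one class crosses a decision threshold, which is how the test length gets shortened.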

  14. Optimal Hops-Based Adaptive Clustering Algorithm

    Science.gov (United States)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load. Optimal distance theory is applied to discover the practical optimal routing path that minimizes the total energy for transmission. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate, and transmits more data because of energy balance.

  15. Numerical Algorithms Based on Biorthogonal Wavelets

    Science.gov (United States)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  16. Algorithmic Differentiation for Calculus-based Optimization

    Science.gov (United States)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
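
    The forward mode of AD can be illustrated with dual numbers, which propagate exact derivatives alongside values through the arithmetic itself. This minimal sketch covers only addition and multiplication; real AD tools handle full programs, higher orders, and reverse mode.

```python
class Dual:
    """Dual number val + dot*eps with eps^2 = 0; dot carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (u*v)' = u*v' + u'*v, encoded in the dot component.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x, exact to working precision (no finite differences)."""
    return f(Dual(x, 1.0)).dot
```

For f(x) = x³ + 2x, `derivative(f, 2.0)` yields 3·2² + 2 = 14 without any truncation error, which is the property the talk's working-precision claim refers to.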

  17. Lane Detection Based on Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Chao Fan

    2013-09-01

    Full Text Available In order to improve the accuracy and robustness of lane detection in complex conditions, such as shadows and changing illumination, a novel detection algorithm based on machine learning was proposed. After pretreatment, a set of Haar-like filters was used to calculate the eigenvalues in the gray image f(x,y) and edge image e(x,y). These features were then trained using an improved boosting algorithm, and the final classification function g(x) was obtained, which was used to judge whether a point x belongs to the lane or not. To avoid the overfitting of traditional boosting, Fisher discriminant analysis was used to initialize the weights of the samples. Testing on many roads showed that this algorithm has good robustness and real-time performance in recognizing the lane in all challenging conditions.

  18. Web Based Genetic Algorithm Using Data Mining

    OpenAIRE

    Ashiqur Rahman; Asaduzzaman Noman; Md. Ashraful Islam; Al-Amin Gaji

    2016-01-01

    This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in an education web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. Through weighting the feature vectors using a Genetic Algorithm we can optimize the prediction accuracy and get a marked improvement over raw classification. It further shows that when the number of features is few; fea...

  19. AN OPTIMIZATION ALGORITHM BASED ON BACTERIA BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Ricardo Contreras

    2014-09-01

    Full Text Available Paradigms based on competition have been shown to be useful for solving difficult problems. In this paper we present a new approach for solving hard problems using a collaborative philosophy. A collaborative philosophy can produce paradigms as interesting as the ones found in algorithms based on a competitive philosophy. Furthermore, we show that the performance on problems involving combinatorial explosion is comparable to the performance obtained using a classic evolutionary approach.

  20. Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.

    Science.gov (United States)

    Kiadaliri, Aliasghar A; Englund, Martin

    2016-10-04

    The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one using a mixture model. Data from 1078 subjects (mean (SD) age 69.4 (7.2) years) with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms. All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample. Assessing the impact of the choice of these algorithms on cost-effectiveness studies through sensitivity analysis is recommended.
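
    The three accuracy measures used in the study (mean error, mean absolute error, root mean squared error) can be computed directly from paired observed and mapped utilities; a minimal sketch:

```python
import math

def prediction_errors(observed, predicted):
    """Mean error, mean absolute error and root mean squared error
    of predicted utilities against observed ones."""
    diffs = [p - o for o, p in zip(observed, predicted)]
    n = len(diffs)
    me = sum(diffs) / n                              # signed bias
    mae = sum(abs(d) for d in diffs) / n             # average magnitude
    rmse = math.sqrt(sum(d * d for d in diffs) / n)  # penalizes large errors
    return me, mae, rmse
```

A mean error near zero with a large MAE is exactly the pattern the abstract describes: overprediction at one end of the scale cancelling underprediction at the other.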

  1. Impacts of 21st century sea-level rise on a Danish major city - an assessment based on fine-resolution digital topography and a new flooding algorithm

    Science.gov (United States)

    Erenskjold Moeslund, Jesper; Klith Bøcher, Peder; Svenning, Jens-Christian; Mølhave, Thomas; Arge, Lars

    2009-11-01

    This study examines the potential impact of 21st century sea-level rise on Aarhus, the second largest city in Denmark, emphasizing the economic risk to the city's real estate. Furthermore, it assesses which adaptation measures can be taken to prevent flooding in areas particularly at risk. We combine a new national Digital Elevation Model in very fine resolution (~2 meter), a new highly computationally efficient flooding algorithm that accurately models the influence of barriers, and geospatial data on real-estate values to assess the economic real-estate risk posed by future sea-level rise to Aarhus. Under the A2 and A1FI (IPCC) climate scenarios we show that relatively large residential areas in the northern part of the city, as well as areas around the river running through the city, are likely to become flooded in the event of extreme but realistic weather events. In addition, most of the large Aarhus harbour would also risk flooding. As much of the area at risk represents high-value real estate, it seems clear that proactive measures other than simple abandonment should be taken in order to avoid heavy economic losses. Among the different possibilities for dealing with an increased sea level, the strategic placement of flood-gates at key potential water-inflow routes and the construction or elevation of existing dikes seem to be the most convenient, most socially acceptable, and perhaps also the cheapest solutions. Finally, we suggest that high-detail flooding models similar to those produced in this study will become an important tool for climate-change-integrated planning of future city development as well as for the development of evacuation plans.

  3. Cloud-based Evolutionary Algorithms: An algorithmic study

    CERN Document Server

    Merelo, Juan-J; Mora, Antonio M; Castillo, Pedro; Romero, Gustavo; Laredo, JLJ

    2011-01-01

    After a proof of concept using Dropbox(tm), a free storage and synchronization service, showed that an evolutionary algorithm using several dissimilar computers connected via WiFi or Ethernet had a good scaling behavior in terms of evaluations per second, it remains to be proved whether that effect also translates to the algorithmic performance of the algorithm. In this paper we will check several different, and difficult, problems, and see what effects the automatic load-balancing and asynchrony have on the speed of resolution of problems.

  4. Concordance of HIV type 1 tropism phenotype to predictions using web-based analysis of V3 sequences: composite algorithms may be needed to properly assess viral tropism.

    Science.gov (United States)

    Cabral, Gabriela Bastos; Ferreira, João Leandro de Paula; Coelho, Luana Portes Osório; Fonsi, Mylva; Estevam, Denise Lotufo; Cavalcanti, Jaqueline Souza; Brígido, Luis Fernando de Macedo

    2012-07-01

    Genotypic prediction of HIV-1 tropism has been considered a practical surrogate for phenotypic tests and recently an European Consensus has set up recommendations for its use in clinical practice. Twenty-five antiretroviral-experienced patients, all heavily treated cases with a median of 16 years of antiretroviral therapy, had viral tropism determined by the Trofile assay and predicted by HIV-1 sequencing of partial env, followed by interpretation using web-based tools. Trofile determined 17/24 (71%) as X4 tropic or dual/mixed viruses, with one nonreportable result. The use of European consensus recommendations for single sequences (geno2pheno false-positive rates 20% cutoff) would lead to 4/24 (16.7%) misclassifications, whereas a composite algorithm misclassified 1/24 (4%). The use of the geno2pheno clinical option using CD4 T cell counts at collection was useful in resolving some discrepancies. Applying the European recommendations followed by additional web-based tools for cases around the recommended cutoff would resolve most misclassifications.

  5. A novel tree structure based watermarking algorithm

    Science.gov (United States)

    Lin, Qiwei; Feng, Gui

    2008-03-01

    In this paper, we propose a new blind watermarking algorithm for images based on a tree structure. The algorithm embeds the watermark in the wavelet transform domain, and the embedding positions are determined by the significant coefficient wavelet tree (SCWT) structure, which follows the same idea as the embedded zero-tree wavelet (EZW) compression technique. According to EZW concepts, we obtain coefficients that are related to each other by a tree structure. This relationship among the wavelet coefficients allows our technique to embed more watermark data. If the watermarked image is attacked such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronously. The algorithm also uses a visually adaptive scheme to insert the watermark and minimize its perceptibility. In addition to the watermark, a template is inserted into the watermarked image at the same time. The template contains synchronization information, allowing the detector to determine the type of geometric transformation applied to the watermarked image. Experimental results show that the proposed watermarking algorithm is robust against most signal processing attacks, such as JPEG compression, median filtering, sharpening and rotation. It is also an adaptive method that performs well in finding the best areas to insert a stronger watermark.

  6. Fast Algorithms for Model-Based Diagnosis

    Science.gov (United States)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.
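
    The notion of a minimal diagnosis can be made concrete with a brute-force hitting-set sketch over conflict sets (sets of components that cannot all be healthy given the observations): each minimal diagnosis is a minimal set of components hitting every conflict. This enumeration is exactly the exhaustive, exponentially growing search the improved algorithms avoid, so it is only viable for tiny systems.

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Enumerate minimal hitting sets of the conflict sets: each result is a
    minimal set of components whose abnormality explains every conflict."""
    diagnoses = []
    # Searching by increasing size guarantees minimality: any candidate
    # containing an already-found diagnosis is skipped.
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            s = set(cand)
            hits_all = all(s & c for c in conflicts)
            is_minimal = not any(set(d) <= s for d in diagnoses)
            if hits_all and is_minimal:
                diagnoses.append(cand)
    return diagnoses
```

For conflicts {A,B} and {B,C}, the minimal diagnoses are {B} alone or {A,C} together, matching the intuition that the trivial all-faulty answer is never minimal.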

  7. Research of the Kernel Operator Library Based on Cryptographic Algorithm

    Institute of Scientific and Technical Information of China (English)

    王以刚; 钱力; 黄素梅

    2001-01-01

    The variety of encryption mechanisms and algorithms conventionally used has some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of its algorithms, a data transfer system built on this cryptographic algorithm library has many remarkable advantages over traditional algorithms: algorithms can be rebuilt and optimized, easily added and deleted, and security is improved. Because the cryptographic algorithm library is extensible, the user can choose any of its algorithms as a countermeasure against a given attack.

  8. Fuzzy logic-based diagnostic algorithm for implantable cardioverter defibrillators.

    Science.gov (United States)

    Bárdossy, András; Blinowska, Aleksandra; Kuzmicz, Wieslaw; Ollitrault, Jacky; Lewandowski, Michał; Przybylski, Andrzej; Jaworski, Zbigniew

    2014-02-01

    The paper presents a diagnostic algorithm for classifying cardiac tachyarrhythmias for implantable cardioverter defibrillators (ICDs). The main aim was to develop an algorithm that could reduce the rate of occurrence of inappropriate therapies, which are often observed in existing ICDs. To achieve low energy consumption, which is a critical factor for implantable medical devices, very low computational complexity of the algorithm was crucial. The study describes and validates such an algorithm and estimates its clinical value. The algorithm was based on the heart rate variability (HRV) analysis. The input data for our algorithm were: RR-interval (I), as extracted from raw intracardiac electrogram (EGM), and in addition two other features of HRV called here onset (ONS) and instability (INST). 6 diagnostic categories were considered: ventricular fibrillation (VF), ventricular tachycardia (VT), sinus tachycardia (ST), detection artifacts and irregularities (including extrasystoles) (DAI), atrial tachyarrhythmias (ATF) and no tachycardia (i.e. normal sinus rhythm) (NT). The initial set of fuzzy rules based on the distributions of I, ONS and INST in the 6 categories was optimized by means of a software tool for automatic rule assessment using simulated annealing. A training data set with 74 EGM recordings was used during optimization, and the algorithm was validated with a validation data set with 58 EGM recordings. Real life recordings stored in defibrillator memories were used. Additionally the algorithm was tested on 2 sets of recordings from the PhysioBank databases: MIT-BIH Arrhythmia Database and MIT-BIH Supraventricular Arrhythmia Database. A custom CMOS integrated circuit implementing the diagnostic algorithm was designed in order to estimate the power consumption. A dedicated Web site, which provides public online access to the algorithm, has been created and is available for testing it. 
The total number of events in our training and validation sets was 132.
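    As a rough illustration of the rule-based idea only, a fuzzy vote over the RR interval alone might look like the sketch below (the triangular membership functions and all thresholds are invented; the article's tuned rules over I, ONS and INST are not reproduced here):

```python
# Hypothetical membership thresholds in milliseconds; the real device rules
# also use the ONS and INST features and cover all 6 categories.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_rr(rr_ms):
    """Fuzzy vote over the RR interval for a few of the rhythm categories."""
    memberships = {
        "VF": tri(rr_ms, 100, 200, 300),   # very short cycles
        "VT": tri(rr_ms, 250, 350, 450),
        "ST": tri(rr_ms, 400, 500, 600),
        "NT": tri(rr_ms, 550, 800, 1500),  # normal sinus rhythm
    }
    return max(memberships, key=memberships.get)

print(classify_rr(180), classify_rr(900))
```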

  9. Network-based recommendation algorithms: A review

    CERN Document Server

    Yu, Fei; Gillard, Sebastien; Medo, Matus

    2015-01-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use - such as the possible influence of recommendation on the evolution of systems that use it - and finally discuss open research directions and challenges.

  10. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use-such as the possible influence of recommendation on the evolution of systems that use it-and finally discuss open research directions and challenges.

  11. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.

  12. Algorithm for Satellite Detection Capability Based on Fuzzy AHP Assessment

    Institute of Scientific and Technical Information of China (English)

    王玉菊; 岳丽军

    2012-01-01

    Taking remote sensing satellites as an example, performance assessment techniques for satellite detection were studied. First, a practical hierarchical structure for assessing satellite detection capability was established. Then motion simulation, probability theory, continuous Markov chains and the Monte Carlo method were used in combination to build algorithm models for the individual indicators of satellite detection assessment. Next, the weights of the evaluation indicators for satellite detection of ship targets were determined with the fuzzy analytic hierarchy process. Finally, an application example verifies the superiority of the algorithm in selecting the best scheme.

  13. Probabilistic risk assessment for six vapour intrusion algorithms

    OpenAIRE

    Provoost, J.; Reijnders, L.; Bronders, J.; Van Keer, I.; Govaerts, S.

    2014-01-01

    A probabilistic assessment with sensitivity analysis using Monte Carlo simulation for six vapour intrusion algorithms, used in various regulatory frameworks for contaminated land management, is presented here. In addition a deterministic approach with default parameter sets is evaluated against observed concentrations for benzene, ethylbenzene and trichloroethylene. The screening-level algorithms are ranked according to accuracy and conservatism in predicting observed soil air and indoor air ...
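    A probabilistic screening assessment of this kind can be sketched as a Monte Carlo loop over an uncertain attenuation factor (the model form, the lognormal distribution and its parameters below are assumptions for illustration, not any of the six published algorithms):

```python
import random

random.seed(0)

# Illustrative screening model: indoor-air concentration equals the soil-air
# concentration times an uncertain attenuation factor sampled from a
# lognormal distribution (mu and sigma are invented placeholders).
def monte_carlo(soil_air_conc, n=10_000, mu=-7.0, sigma=1.0):
    samples = [soil_air_conc * random.lognormvariate(mu, sigma)
               for _ in range(n)]
    samples.sort()
    return {
        "median": samples[n // 2],
        "p95": samples[int(0.95 * n)],   # conservative upper estimate
    }

stats = monte_carlo(1000.0)  # e.g. a soil-air concentration in ug/m3
print(stats)
```

    Ranking algorithms by conservatism, as the paper does, then amounts to comparing such upper percentiles against observed indoor-air concentrations.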

  14. A Single Pattern Matching Algorithm Based on Character Frequency

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Based on the study of single pattern matching, the MBF algorithm is proposed by imitating the way humans search a string. The algorithm preprocesses the pattern using the idea of the Quick Search algorithm together with information about the already-matched pattern prefix and suffix. In the searching phase, the algorithm makes use of character usage frequency and a continue-skip idea. Experiments show that the MBF algorithm is more efficient than other algorithms.
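    The Quick Search preprocessing that MBF builds on can be sketched as follows (this is Sunday's Quick Search itself, not MBF; the frequency and continue-skip refinements are not specified in the abstract):

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: shift by the character just past the window."""
    m, n = len(pattern), len(text)
    # Bad-character table: distance from the rightmost occurrence of each
    # pattern character to one position past the pattern's end.
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        nxt = text[i + m] if i + m < n else None
        i += shift.get(nxt, m + 1)   # characters absent from the pattern
    return hits

print(quick_search("abracadabra", "abra"))  # [0, 7]
```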

  15. External Threat Risk Assessment Algorithm (ExTRAA)

    Energy Technology Data Exchange (ETDEWEB)

    Powell, Troy C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Two risk assessment algorithms and philosophies have been augmented and combined to form a new algorithm, the External Threat Risk Assessment Algorithm (ExTRAA), that allows for effective and statistically sound analysis of external threat sources in relation to individual attack methods. In addition to the attack method use probability and the attack method employment consequence, the concept of defining threat sources is added to the risk assessment process. Sample data is tabulated and depicted in radar plots and bar graphs for algorithm demonstration purposes. The largest success of ExTRAA is its ability to visualize the kind of risk posed in a given situation using the radar plot method.

  16. A Hybrid Algorithm for Satellite Data Transmission Schedule Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Yun-feng; WU Xiao-yue

    2008-01-01

    A hybrid scheduling algorithm based on genetic algorithms is proposed in this paper for reconnaissance satellite data transmission. First, based on a description of satellite data transmission requests, a satellite data transmission task model and a satellite data transmission scheduling problem model are established. Second, the conflicts in scheduling are discussed, and according to the meaning of possible conflict, a method to divide the possible-conflict task set is given. Third, a hybrid algorithm is presented that combines a genetic algorithm with heuristic information derived from two concepts, conflict degree and conflict number. Finally, an example shows the algorithm's feasibility and its better performance compared with traditional algorithms.

  17. Generalized Rule Induction Based on Immune Algorithm

    Institute of Scientific and Technical Information of China (English)

    郑建国; 刘芳; 焦李成

    2002-01-01

    A generalized rule induction mechanism for knowledge bases, based on an immune algorithm, builds an inheritance hierarchy of classes from the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, and finding similarities among objects. Building this hierarchy is a difficult task for knowledge engineers. Conceptual induction may be used to automate or assist engineers in the creation of such a classification structure. This paper introduces a new conceptual rule induction method, which addresses the problem of clustering large amounts of structured objects. The conditions under which the method is applicable are discussed.

  18. Continuous Attributes Discretization Algorithm based on FPGA

    Directory of Open Access Journals (Sweden)

    Guoqiang Sun

    2013-07-01

    Full Text Available The paper addresses the problem of discretization of continuous attributes in rough set theory. Discretization of continuous attributes is an important part of rough set theory because most of the data we usually obtain are continuous. In order to improve the processing speed of discretization, we propose an FPGA-based discretization algorithm for continuous attributes that makes use of the speed advantage of FPGAs. Combined with the attribute dependency degree of rough set theory, the discretization system was divided into eight modules in a block design. This method can save much pretreatment time in rough set analysis and improve operational efficiency. Extensive experiments on fault diagnosis data for a certain fighter aircraft validate the effectiveness of the algorithm.

  19. Assessment of mesh simplification algorithm quality

    Science.gov (United States)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  20. Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction

    Institute of Scientific and Technical Information of China (English)

    TONG Liang; LU Ji-lian

    2006-01-01

    Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on a multi-agent inverted pendulum is conducted to test the efficiency of the new algorithm, and the experimental results show that the new algorithm can achieve the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.

  1. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travel route development, they face a series of challenges because of people's increasing interest in travelling. The core content of a recommendation system is its recommendation algorithm, and a strong algorithm can greatly improve the system. On this basis, this paper analyses the traditional collaborative filtering algorithm and illustrates its deficiencies, such as rating unicity and rating matrix sparsity. It then proposes an improved algorithm combining a user-based multi-similarity algorithm with a user-based element similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages over the traditional one, in particular in remedying rating matrix sparsity and rating unicity.
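    The user-based collaborative filtering baseline that the paper improves on can be sketched with cosine similarity over a toy rating matrix (the users, items and ratings below are invented; the paper's multi-similarity and element-similarity measures are not reproduced):

```python
import math

# Toy user-item rating matrix (users -> {destination: rating}).
ratings = {
    "alice": {"paris": 5, "rome": 3, "tokyo": 4},
    "bob":   {"paris": 4, "rome": 3, "tokyo": 5},
    "carol": {"paris": 1, "rome": 5},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def recommend(user, k=1):
    """Score unseen items by similarity-weighted ratings of other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        s = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + s * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("carol"))  # carol's best unseen destination
```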

  2. Asian Option Pricing Based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    YunzhongLiu; HuiyuXuan

    2004-01-01

    The cross-fertilization between artificial intelligence and computational finance has resulted in some of the most active research areas in financial engineering. One direction is the application of machine learning techniques to pricing financial products, which is certainly one of the most complex issues in finance. In the literature, when the interest rate, the mean rate of return and the volatility of the underlying asset follow general stochastic processes, the exact solution is usually not available. In this paper, we illustrate how genetic algorithms (GAs), as a numerical approach, can be potentially helpful in dealing with pricing. In particular, we test the performance of basic genetic algorithms by applying them to the determination of prices of Asian options, whose exact solutions are known from Black-Scholes option pricing theory. The solutions found by basic genetic algorithms are compared with the exact solution, and the performance of GAs is evaluated accordingly. Based on these evaluations, some limitations of GAs in option pricing are examined and possible extensions to future work are also proposed.

  3. Improved pulse laser ranging algorithm based on high speed sampling

    Science.gov (United States)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm is studied based on high speed sampling. First, theoretical simulation models are built and analyzed, covering the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm; it includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment is carried out to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm on the hardware system. The results demonstrate that the hardware system realized high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, which meets the expected effect and is consistent with the theoretical simulation.
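    The fusion of matched filtering and CFD can be illustrated on a toy sampled waveform (the template, sample values and the 50% fraction below are assumptions; the paper's implementation runs on real high-rate samples in an FPGA):

```python
# Correlate the sampled return with the known pulse shape, then time the
# correlation peak with a simple constant-fraction threshold crossing.
def matched_filter(signal, template):
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def cfd_index(wave, fraction=0.5):
    """Index of the first sample crossing fraction * peak amplitude."""
    thr = fraction * max(wave)
    for i, v in enumerate(wave):
        if v >= thr:
            return i
    return None

template = [1, 3, 1]                      # assumed pulse shape
signal = [0, 0, 1, 3, 1, 0, 0, 0]         # noiseless return at sample 2
corr = matched_filter(signal, template)
print(corr, cfd_index(corr))
```

    Converting the crossing index to a distance is then a matter of the sample period and the speed of light, which the hardware system handles at full sampling rate.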

  4. Remote triage support algorithm based on fuzzy logic.

    Science.gov (United States)

    Achkoski, Jugoslav; Koceski, S; Bogatinov, D; Temelkovski, B; Stevanovski, G; Kocev, I

    2017-06-01

    This paper presents a remote triage support algorithm as part of a complex military telemedicine system which provides continuous monitoring of soldiers' vital sign data gathered on-site using an unobtrusive set of sensors. The proposed fuzzy logic-based algorithm takes physiological data and classifies the casualties according to their health risk level, calculated following the Modified Early Warning Score (MEWS) methodology. To verify the algorithm, eight different evaluation scenarios using random vital sign data were created. In each scenario, the hypothetical condition of the victims was assessed in parallel both by the system and by 50 doctors with significant experience in the field. The results showed a high (0.928) average correlation of the classification results. This suggests that the proposed algorithm can be used for automated remote triage in real life-saving situations even before the medical team arrives at the spot, shortening response times. Moreover, an additional study was conducted in order to increase the computational efficiency of the algorithm without compromising the quality of the classification results. Published by the BMJ Publishing Group Limited.
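    The crisp MEWS scoring that the fuzzy algorithm builds on can be sketched as follows (the thresholds follow one commonly published MEWS variant and may differ from the paper's fuzzified version):

```python
def mews(sys_bp, hr, rr, temp_c, avpu):
    """Modified Early Warning Score from five vital signs; thresholds are
    one commonly cited MEWS table, used here only for illustration."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sys_bp <= 70: score += 3
    elif sys_bp <= 80: score += 2
    elif sys_bp <= 100: score += 1
    elif sys_bp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass
    elif hr <= 110: score += 1
    elif hr < 130: score += 2
    else: score += 3
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: pass
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Temperature (deg C)
    if temp_c < 35.0 or temp_c >= 38.5: score += 2
    # Level of consciousness (AVPU scale)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

print(mews(sys_bp=85, hr=115, rr=24, temp_c=39.0, avpu="voice"))  # 8
```

    The paper's fuzzy version replaces these hard thresholds with membership functions, which smooths classification near the boundaries.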

  5. A new optimization algorithm based on chaos

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this article, methods are proposed for increasing the convergence speed of the chaos optimization algorithm (COA) based on using the carrier wave twice: a fast, efficient first carrier wave searches for the neighbourhood of the optimal point, so that the refined search carried out during the second carrier wave is faster and more accurate. In addition, the concept of using the carrier wave three times is proposed and put into practice to tackle multi-variable optimization problems, in which the search for the optimal point over the last several variables is frequently worse than over the first several.
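    The two-carrier-wave idea can be sketched with the logistic map as the chaotic carrier (the search range, map seeds, sample counts and shrink factor below are illustrative assumptions, not the article's exact scheme):

```python
# Stage 1: coarse chaotic scan of the whole interval.
# Stage 2: a second chaotic scan confined to a small window around the
# best point found, which refines the estimate quickly.
def logistic_sequence(x0, n):
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)   # fully chaotic logistic map
        xs.append(x)
    return xs

def chaos_minimize(f, lo, hi, n1=2000, n2=2000, shrink=0.05):
    # First carrier wave: coarse search over [lo, hi]
    best_x, best_f = None, float("inf")
    for z in logistic_sequence(0.345, n1):
        x = lo + z * (hi - lo)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    # Second carrier wave: fine search in a shrunken window around best_x
    r = shrink * (hi - lo)
    for z in logistic_sequence(0.579, n2):
        x = max(lo, min(hi, best_x - r + 2.0 * r * z))
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

x, fx = chaos_minimize(lambda x: (x - 1.7) ** 2, 0.0, 4.0)
print(round(x, 3), fx)
```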

  6. Computational Performance Assessment of k-mer Counting Algorithms.

    Science.gov (United States)

    Pérez, Nelson; Gutierrez, Miguel; Vera, Nelson

    2016-04-01

    This article is about the assessment of several tools for k-mer counting, with the purpose of creating a reference framework for bioinformatics researchers to identify the computational requirements, parallelization, advantages, disadvantages, and bottlenecks of each of the algorithms proposed in the tools. The k-mer counters evaluated in this article were BFCounter, DSK, Jellyfish, KAnalyze, KHMer, KMC2, MSPKmerCounter, Tallymer, and Turtle. The measured parameters were RAM footprint, processing time, parallelization, and read and write disk access. A dataset consisting of 36,504,800 reads was used, corresponding to the 14th human chromosome. The assessment was performed for two k-mer lengths: 31 and 55. The results were as follows: pure Bloom filter-based tools and disk-partitioning techniques showed lower RAM use; the tools that took the least execution time were the ones that used disk-partitioning techniques; and the techniques that achieved the most parallelization were the ones that used disk partitioning, hash tables with a lock-free approach, or multiple hash tables.
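    For reference, the naive in-memory hash-table counting that all of these tools optimize (with Bloom filters, disk partitioning, or lock-free tables) looks like this:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count every length-k substring of a sequence with a hash table.
    Simple and exact, but RAM grows with the number of distinct k-mers,
    which is exactly the bottleneck the benchmarked tools attack."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = kmer_counts("ATATAT", 3)
print(counts)
```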

  7. Function Optimization Based on Quantum Genetic Algorithm

    OpenAIRE

    Ying Sun; Hegen Xiong

    2014-01-01

    Optimization method is important in engineering design and application. Quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability and so on. It combines quantum algorithm with genetic algorithm. A novel quantum genetic algorithm is proposed, which is called Variable-boundary-coded Quantum Genetic Algorithm (vbQGA) in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded c...

  8. Function Optimization Based on Quantum Genetic Algorithm

    OpenAIRE

    Ying Sun; Yuesheng Gu; Hegen Xiong

    2013-01-01

    Quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability and so on. It combines quantum algorithm with genetic algorithm. A novel quantum genetic algorithm is proposed, which is called variable-boundary-coded quantum genetic algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes. Therefore much shorter chromosome strings can be gained. The m...

  9. Assessing the Accuracy of Prediction Algorithms for Classification

    DEFF Research Database (Denmark)

    Baldi, P.; Brunak, Søren; Chauvin, Y.

    2000-01-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, quadratic error measures and other distances, and correlation coefficients, to information-theoretic measures such as relative entropy and mutual...

  10. Assessment of the innovative quality of agomelatine through the Innovation Assessment Algorithm

    Directory of Open Access Journals (Sweden)

    Liliana Civalleri

    2012-09-01

    Full Text Available Aim: the aim of this study was to assess the innovative quality of a medicine based on agomelatine, authorized by the European Commission through a centralized procedure on 19th February 2009 and distributed in Italy under the brands Valdoxan® and Thymanax®. Methodology: the degree of innovation of agomelatine was determined through the Innovation Assessment Algorithm (IAA), which considers the innovative quality of a medicine as a combination of multiple properties. The algorithm may be represented as a decision tree, with each branch corresponding to a property connected with innovation and having a fixed numerical value. The sum of these values establishes the degree of innovation of the medicine. The IAA is articulated in two phases: the first assesses the efficacy of the drug based on the clinical trials presented in support of the registration application (IAA-efficacy); the second reconsiders the degree of innovation on the basis of the efficacy and safety data resulting from clinical practice once the drug has been placed on the market (IAA-effectiveness). Results and conclusions: the score obtained for agomelatine was 592.73 in the efficacy phase and 291.3 in the effectiveness phase. The total score for the two phases was 884, which is equivalent to a good degree of innovation for the molecule.

  11. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which clearly shows the superiority of the algorithm over conventional ones. Not only does the algorithm offer better parallelism, it also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.
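    For contrast with the immune-system search, the underlying problem, enumerating the K shortest loopless paths, can be sketched with a plain deterministic heap-based enumeration on a toy graph (graph, weights and method are illustrative, not the paper's algorithm):

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """Enumerate the K shortest loopless paths by always expanding the
    cheapest partial path on a priority queue. Exponential in the worst
    case, which is why smarter searches are of interest."""
    heap = [(0, [src])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(node, {}).items():
            if nxt not in path:           # keep paths loopless
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
}
print(k_shortest_paths(graph, "A", "D", k=2))
```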

  12. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.

  13. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    Science.gov (United States)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.

  14. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

    Full Text Available To solve the combinatorial explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalization reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy assembly knowledge is integrated into the planning process of the genetic algorithm and ant algorithm to obtain an accurate solution. Finally, an example is given to verify the feasibility of the composite algorithm.

  15. A Survey of Grid Based Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    MR ILANGO

    2010-08-01

    Full Text Available Cluster analysis, an automatic process to find similar objects from a database, is a fundamental operation in data mining. A cluster is a collection of data objects that are similar to one another within the same cluster and are dissimilar to the objects in other clusters. Clustering techniques have been discussed extensively in similarity search, segmentation, statistics, machine learning, trend analysis, pattern recognition and classification [1]. Clustering methods can be classified into (i) partitioning methods, (ii) hierarchical methods, (iii) density-based methods, (iv) grid-based methods and (v) model-based methods. Grid-based methods quantize the object space into a finite number of cells (hyper-rectangles) and then perform the required operations on the quantized space. The main advantage of grid-based methods is their fast processing time, which depends on the number of cells in each dimension of the quantized space. In this research paper, we present some of the grid-based methods such as CLIQUE (CLustering In QUEst) [2], STING (STatistical INformation Grid) [3], MAFIA (Merging of Adaptive Intervals Approach to Spatial Data Mining) [4], WaveCluster [5] and O-CLUSTER (Orthogonal partitioning CLUSTERing) [6] as a survey, and also compare their effectiveness in clustering data objects. We also present some of the latest developments in grid-based methods, such as the Axis Shifted Grid Clustering Algorithm [7] and Adaptive Mesh Refinement (Wei-Keng Liao et al.) [8], which improve the processing time of objects.
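    The quantize-then-merge idea shared by these methods can be sketched in two dimensions (the cell size, density threshold and flood-fill merge below are deliberate simplifications of what CLIQUE, STING and the others actually do):

```python
from collections import defaultdict

def grid_cluster(points, cell=1.0, min_pts=2):
    """Minimal grid-based clustering: quantize points into cells, keep the
    dense cells, and merge neighbouring dense cells into clusters."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    dense = {c for c, pts in cells.items() if len(pts) >= min_pts}
    clusters, seen = [], set()
    for c in dense:                       # flood fill over dense neighbours
        if c in seen:
            continue
        stack, group = [c], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.extend(cells[cur])
            x, y = cur
            stack += [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (x + dx, y + dy) in dense]
        clusters.append(group)
    return clusters

pts = [(0.1, 0.1), (0.2, 0.3), (0.9, 0.8), (5.0, 5.1), (5.2, 5.3)]
print(len(grid_cluster(pts)))  # two well-separated groups
```

    The processing time depends on the number of occupied cells rather than on pairwise point comparisons, which is the speed advantage the survey highlights.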

  16. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  17. ALGORITHM FOR GENERATING DEM BASED ON CONE

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Digital elevation models (DEMs) have a variety of applications in GIS and CAD; the DEM is the basic model for generating three-dimensional terrain features. Generally speaking, there are two methods for building a DEM. One is based upon a digital terrain model of discrete points and is characterized by fast speed and low precision. The other is based upon a triangular digital terrain model, and slow speed and high precision are the features of that method. Combining the advantages of the two methods, an algorithm for generating a DEM from discrete points is presented in this paper. When interpolating elevation, this method creates a triangle that includes the interpolating point, and the elevation of the interpolating point can be obtained from the triangle. The method has the advantages of fast speed, high precision and low memory use.
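    Once a triangle containing the query point has been found, the interpolation step can be sketched with barycentric coordinates (the triangle construction itself, the heart of the paper's algorithm, is omitted; the vertices below are invented):

```python
def interpolate_elevation(p, tri):
    """Elevation at point p inside a triangle of (x, y, z) vertices, via
    barycentric coordinates: a weighted average of the vertex heights."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * z1 + l2 * z2 + l3 * z3

# Hypothetical triangle: two vertices at elevation 10 m and 20 m on the
# x-axis, one at 30 m up the y-axis.
tri = ((0, 0, 10.0), (4, 0, 20.0), (0, 4, 30.0))
print(interpolate_elevation((1, 1), tri))  # 17.5
```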

  18. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the classifiers concerned. In this work, one hundred (100) features were extracted from a set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and were better in many ways than the WEKA feature selectors in terms of classification accuracy.
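A compact sketch of the approach described above, a binary GA whose fitness is a kNN-based classification error over the masked features, can be written without any toolbox. Everything here (toy data, population size, k = 1) is an illustrative assumption, not the Flavia/MATLAB setup from the article:

```python
import random

def knn_error(mask, X, y):
    """Leave-one-out 1-NN error using only the features where mask[j] == 1."""
    active = [j for j, m in enumerate(mask) if m]
    if not active:
        return 1.0  # selecting no features is the worst possible mask
    errors = 0
    for i in range(len(X)):
        dists = [(sum((X[i][j] - X[k][j]) ** 2 for j in active), k)
                 for k in range(len(X)) if k != i]
        _, nearest = min(dists)
        errors += (y[nearest] != y[i])
    return errors / len(X)

def ga_feature_select(X, y, pop_size=12, generations=25, seed=0):
    """Binary GA over feature masks; the fitness is the kNN-based
    classification error, as in the article (k = 1 here for brevity)."""
    rng = random.Random(seed)
    n = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: knn_error(m, X, y))
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]        # one-point crossover
            child[rng.randrange(n)] ^= 1     # one-bit mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda m: knn_error(m, X, y))

# toy data: feature 0 separates the classes, features 1-3 are noise
rng = random.Random(42)
X = [[i % 2 + rng.gauss(0, 0.1)] + [rng.gauss(0, 1) for _ in range(3)]
     for i in range(30)]
y = [i % 2 for i in range(30)]
best_mask = ga_feature_select(X, y)
```

On this toy data the GA should keep the informative feature and drop the noise dimensions.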

  19. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE is described and examples of different...... processing steps are given...

  20. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints....

  1. Speech Enhancement based on Compressive Sensing Algorithm

    Science.gov (United States)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods of speech enhancement have been proposed over the years, with the main design focus on quality and intelligibility. This paper proposes a novel speech enhancement method using compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The overall algorithm is evaluated on speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well on a wide range of speech tests, giving good speech enhancement with better noise suppression than conventional approaches and without obvious degradation of speech quality.

  2. PDE Based Algorithms for Smooth Watersheds.

    Science.gov (United States)

    Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik

    2016-04-01

    Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.

  3. A Text Categorization Algorithm Based on Sense Group

    Directory of Open Access Journals (Sweden)

    Jing Wan

    2013-02-01

    Full Text Available Giving further consideration to linguistic features, this study proposes an algorithm for Chinese text categorization based on sense groups. The algorithm extracts sense groups by analyzing the syntactic and semantic properties of Chinese texts and builds a category sense-group library. SVM is used for the text categorization experiment. The experimental results show that the precision and recall of the new sense-group-based algorithm are better than those of traditional algorithms.

  4. POWER OPTIMIZATION ALGORITHM BASED ON XNOR/OR LOGIC

    Institute of Scientific and Technical Information of China (English)

    Wang Pengjun; Lu Jingang; Xu Jian; Dai Jing

    2009-01-01

    Based on an investigation of the XNOR/OR logical expression and the propagation algorithm of signal probability, a low-power synthesis algorithm based on XNOR/OR logic is proposed in this paper. The proposed algorithm has been implemented in C. Fourteen Microelectronics Center North Carolina (MCNC) benchmarks were tested, and the results show that the proposed algorithm not only reduces the average power consumption significantly, by up to 27% without area or delay penalties, but also shortens the runtime.
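The signal-probability propagation that such synthesis algorithms rely on reduces to simple closed forms per gate, and dynamic power then tracks each signal's switching activity. A sketch of these formulas, assuming independent inputs (the usual simplification):

```python
def prob_or(p_a, p_b):
    """Probability that an OR gate outputs 1, given independent inputs
    that are 1 with probabilities p_a and p_b."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

def prob_xnor(p_a, p_b):
    """Probability that an XNOR gate outputs 1: both inputs 1 or both 0."""
    return p_a * p_b + (1.0 - p_a) * (1.0 - p_b)

def switching_activity(p):
    """For a signal that is 1 with probability p, the chance it toggles
    between two uncorrelated evaluations is 2p(1-p); dynamic power is
    proportional to this quantity, which is what the synthesis step
    tries to reduce."""
    return 2.0 * p * (1.0 - p)
```

Propagating these probabilities through a netlist gives an estimate of total switching activity without simulation.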

  5. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Science.gov (United States)

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  6. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Directory of Open Access Journals (Sweden)

    Umberto Melia

    Full Text Available We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  7. Performance evaluation of sensor allocation algorithm based on covariance control

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The covariance control capability of sensor allocation algorithms based on a covariance control strategy is an important index for evaluating the performance of these algorithms. Owing to the lack of standard performance metrics for evaluating covariance control capability, sensor allocation ratio, etc., there are no guidelines to follow when designing sensor allocation algorithms for practical applications. To meet these demands, three quantified performance metrics are presented: average covariance misadjustment quantity (ACMQ), average sensor allocation ratio (ASAR) and matrix metric influence factor (MMIF), which quantify the covariance control capability, the usage of sensor resources and the robustness of a sensor allocation algorithm, respectively. Meanwhile, a covariance-adaptive sensor allocation algorithm based on a new objective function is proposed to improve the covariance control capability of the information-gain-based algorithm. The experimental results show that the proposed algorithm has the advantage over the preceding sensor allocation algorithm in covariance control capability and robustness.

  8. Chaos-Based Multipurpose Image Watermarking Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Congxu; LIAO Xuefeng; LI Zhihua

    2006-01-01

    To achieve the goals of image content authentication and copyright protection simultaneously, this paper presents a novel image dual-watermarking method based on a chaotic map. Firstly, the host image is split into many non-overlapping small blocks, and the block-wise discrete cosine transform (DCT) is computed. Secondly, robust watermarks, shuffled by chaotic sequences, are embedded in the DC coefficients of the blocks to achieve copyright protection. Semi-fragile watermarks, generated by a chaotic map, are embedded in the AC coefficients of the blocks for image authentication. Both can be extracted without the original image. Simulation results demonstrate the effectiveness of the algorithm in terms of robustness and fragility.

  9. Review: Image Encryption Using Chaos Based algorithms

    Directory of Open Access Journals (Sweden)

    Er. Ankita Gaur

    2014-03-01

    Full Text Available Due to developments in network technology and multimedia applications, every minute thousands of messages, which can be text, images, audio or video, are created and transmitted over wireless networks. Improper delivery of a message may lead to the leakage of important information, so encryption is used to provide security. In the last few years, a variety of image encryption algorithms based on chaotic systems have been proposed to protect images from unauthorized access. A 1-D chaotic system using logistic maps has weak security and a small key space, and, due to floating-point pixel values, some data loss occurs and proper decryption of the image becomes impossible. In this paper different chaotic maps, such as the Arnold cat map, sine map, logistic map and tent map, have been studied.
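The basic pattern the reviewed algorithms share can be sketched as a logistic-map keystream XORed with the data. This is a toy illustration only; as the review notes, plain 1-D logistic-map schemes have weak security, and the key values below are arbitrary:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n key bytes by iterating the logistic map x -> r*x*(1-x).
    The first burn_in iterates are discarded so the stream does not leak
    the key (x0, r) directly. A toy sketch, not a vetted cipher."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantize the orbit to a byte
    return stream

def xor_cipher(data, x0=0.3579, r=3.99):
    """Encrypt/decrypt bytes by XOR with the chaotic keystream; XOR is
    its own inverse, so the same call decrypts."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

cipher = xor_cipher(b"pixel data")
plain = xor_cipher(cipher)
```

The quantization step also shows the floating-point fragility mentioned above: a platform that rounds the orbit differently produces a different keystream and cannot decrypt.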

  10. An intersection algorithm based on transformation

    Institute of Scientific and Technical Information of China (English)

    CHEN Xiao-xia; YONG Jun-hai; CHEN Yu-jian

    2006-01-01

    How to obtain the intersection of curves and surfaces is a fundamental problem in many areas such as computer graphics, CAD/CAM, computer animation, and robotics. In particular, how to deal with singular cases, such as tangency or superposition, is a key problem in obtaining intersection results. A method for solving the intersection problem based on coordinate transformation is presented. With the Lagrange multiplier method, the minimum distance between the center of a circle and a quadric surface is given as well. Experience shows that the coordinate transformation can significantly simplify the intersection calculation in the tangency condition. It improves the stability of the intersection of given curves and surfaces in singular cases. The new algorithm is applied in a three-dimensional CAD software package (GEMS), produced by Tsinghua University.

  11. An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique

    Institute of Scientific and Technical Information of China (English)

    SHI Yan; HUANG Cong-ming

    2006-01-01

    An improved particle swarm optimization (PSO) algorithm based on an ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared, on three different benchmark functions, with the standard PSO algorithm invented by Kennedy and Eberhart and with some improved PSO algorithms. The simulation results show that the improved PSO based on the ensemble technique finds better solutions than the standard PSO and some other improved algorithms in all test cases.
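The Epbest idea is small enough to sketch directly: replace the gbest term of the velocity update with the average of all personal bests. The sphere objective and coefficient values below are illustrative assumptions:

```python
import random

def pso_ensemble(dim=5, swarm=12, iters=80, seed=3):
    """Minimal PSO on the sphere function where the social attractor is
    an ensemble position (the average of all personal bests) instead of
    the single global best, as the abstract describes."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        # the ensemble position replaces gbest in the velocity update
        epbest = [sum(p[d] for p in pbest) / swarm for d in range(dim)]
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (epbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
    return min(f(p) for p in pbest)

result = pso_ensemble()
```

Averaging the personal bests dilutes the pull of any single (possibly premature) leader, which is the intended diversity benefit.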

  12. A New Aloha Anti-Collision Algorithm Based on CDMA

    Science.gov (United States)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems. It affects the integrity of data transmission during communication in the RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the grouped dynamic framed slotted Aloha algorithm with code division multiple access (CDMA) technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing the reader recognition time and improving the overall system throughput rate.
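For context on why collisions cap throughput, the success rate of plain framed slotted Aloha has a closed form: a slot is useful only when exactly one tag picks it. A sketch of the baseline (the grouping and CDMA extensions from the paper are not modeled):

```python
def aloha_efficiency(n_tags, frame_size):
    """Fraction of slots in a frame that carry exactly one tag reply in
    framed slotted Aloha, assuming each tag picks a slot uniformly at
    random. This is the quantity the frame-size adaptation maximizes;
    CDMA codes further rescue some of the colliding slots."""
    p = 1.0 / frame_size
    # P(exactly one of n_tags chose a given slot), summed per slot
    return n_tags * p * (1.0 - p) ** (n_tags - 1)
```

Efficiency peaks when the frame size matches the tag count, which is why dynamic frame sizing (and tag grouping) matters.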

  13. A research on fast FCM algorithm based on weighted sample

    Institute of Scientific and Technical Information of China (English)

    KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li

    2006-01-01

    To improve the computational performance of the fuzzy C-means (FCM) algorithm used for clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, are introduced, and a novel fast clustering algorithm named the weighted fuzzy C-means (WFCM) algorithm, derived from the traditional FCM algorithm, is put forward. It is proved that the two clustering algorithms, WFCM and FCM, produce equivalent cluster results on the dataset. Furthermore, the WFCM algorithm has better computational performance than the ordinary FCM algorithm. An experiment on gray image segmentation shows that the WFCM algorithm is a fast and effective clustering algorithm.

  14. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    Science.gov (United States)

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSN) are widely used in many applications. A WSN is a decentralized wireless network comprised of nodes that autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization solutions are being pursued as a cost-effective alternative to the more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
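The DV-Hop baseline that the paper improves on estimates a distance as hop count times an average per-hop distance learned from the anchors. A minimal sketch on a toy chain topology (the genetic-algorithm refinement is not reproduced):

```python
from collections import deque

def hop_counts(adj, source):
    """BFS hop distance from source to every node in the connectivity graph."""
    hops = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def dv_hop_distances(adj, anchors, unknown):
    """Classic DV-Hop distance estimation: each anchor computes its
    average per-hop distance from the other anchors' true positions,
    then the unknown node's distance to that anchor is estimated as
    hops * average hop size."""
    est = {}
    for a, (ax, ay) in anchors.items():
        hops = hop_counts(adj, a)
        total_d = total_h = 0.0
        for b, (bx, by) in anchors.items():
            if b != a:
                total_d += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                total_h += hops[b]
        hop_size = total_d / total_h
        est[a] = hops[unknown] * hop_size
    return est

# 4-node chain 0-1-2-3: anchors at the ends, unknown node 1 in between
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
anchors = {0: (0.0, 0.0), 3: (3.0, 0.0)}
est = dv_hop_distances(adj, anchors, unknown=1)
```

The estimated anchor distances then feed a position solver (e.g. least squares); the GA in the paper refines that final step.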

  15. Uzawa Type Algorithm Based on Dual Mixed Variational Formulation

    Institute of Scientific and Technical Information of China (English)

    王光辉; 王烈衡

    2002-01-01

    Based on the dual mixed variational formulation with three variants (stress, displacement, displacement on the contact boundary) and a finite element discretization of the unilateral beam problem, an Uzawa-type iterative algorithm is presented. The convergence of this iterative algorithm is proved, and the efficiency of the algorithm is then tested with a numerical example.
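The Uzawa pattern, alternating an explicit primal solve with a gradient ascent step on the multiplier, can be shown on a toy equality-constrained quadratic program where the stiffness block is the identity. This is an illustrative stand-in for the mixed finite-element system in the note:

```python
def uzawa(b, c, rho=0.8, iters=200):
    """Uzawa iteration for the saddle-point problem
        minimize 0.5*||x||^2 - b.x   subject to   x1 + x2 = c.
    With the identity stiffness block the primal solve is explicit:
    x = b - B^T * lam, where B = [1, 1] encodes the constraint."""
    lam = 0.0
    for _ in range(iters):
        x = [b[0] - lam, b[1] - lam]    # primal update: x = A^{-1}(b - B^T lam)
        lam += rho * (x[0] + x[1] - c)  # dual ascent on the constraint residual
    return x, lam

x, lam = uzawa(b=[1.0, 1.0], c=1.0)
```

For this instance the exact saddle point is x = (0.5, 0.5), lam = 0.5, and the iteration contracts toward it whenever the step rho is small enough.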

  16. Replication-based Inference Algorithms for Hard Computational Problems

    OpenAIRE

    Alamino, Roberto C.; Neirotti, Juan P.; Saad, David

    2013-01-01

    Inference algorithms based on evolving interactions between replicated solutions are introduced and analyzed on a prototypical NP-hard problem - the capacity of the binary Ising perceptron. The efficiency of the algorithm is examined numerically against that of the parallel tempering algorithm, showing improved performance in terms of the results obtained, computing requirements and simplicity of implementation.

  17. Network Intrusion Detection based on GMKL Algorithm

    Directory of Open Access Journals (Sweden)

    Li Yuxiang

    2013-06-01

    Full Text Available According to the 31st statistical report of the China Internet Network Information Center (CNNIC), by the end of December 2012 the number of Chinese netizens had reached 564 million, and the number of mobile Internet users had reached 420 million. But while the network brings great convenience to people's lives, it also brings huge threats. By collecting and analyzing information in a computer system or network, we can detect behaviors that may damage the availability, integrity and confidentiality of computer resources, and treat these behaviors in a timely manner, which is of great research significance for improving the operating environment of networks and network services. At present, Neural Networks, Support Vector Machines (SVM), Hidden Markov Models, fuzzy inference and Genetic Algorithms have been introduced into research on network intrusion detection, in an attempt to build a healthy and secure network operating environment. But most of these algorithms are based on the complete sample set and assume that the number of samples is infinite. In the field of network intrusion, the collected data often cannot meet these requirements; it typically exhibits high dimensionality, variability and small-sample characteristics, and traditional machine learning methods struggle to obtain ideal results on such data. In view of this, this paper proposes a Generalized Multiple Kernel Learning (GMKL) method for network intrusion detection. Generalized Multiple Kernel Learning can be applied to large-scale sample data of complex dimensionality containing a large amount of heterogeneous information. The experimental results show that applying GMKL to network attack detection achieves high classification precision and a low false alarm rate.

  18. Impacts of 21st century sea-level rise on a Danish major city - an assessment based on fine-resolution digital topography and a new flooding algorithm

    DEFF Research Database (Denmark)

    Moeslund, Jesper Erenskjold; Bøcher, Peter Klith; Svenning, J.-C.

    2009-01-01

    This study examines the potential impact of 21st century sea-level rise on Aarhus, the second largest city in Denmark, emphasizing the economic risk to the city's real estate. Furthermore, it assesses which adaptation measures can be taken to prevent flooding in areas particularly a...

  19. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands

    Science.gov (United States)

    Higa, Hiroto; Kobayashi, Hiroshi; Oki, Kazuo

    2017-01-01

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms incorporating the 665-nm band, the 680-nm band, and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that 3b_tuning_QP and 3b_680_QP outperform the other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the

  20. An incremental clustering algorithm based on Mahalanobis distance

    Science.gov (United States)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance in classical fuzzy c-means clustering with the Mahalanobis distance, and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm is not only an effective remedy for the defect of the fuzzy c-means algorithm but also increases training accuracy.
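The core substitution described above is just a change of metric. A sketch of the squared Mahalanobis distance for 2-D data, with the covariance inverse written out in closed form (the incremental clustering loop itself is not reproduced):

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance for 2-D points, inverting the 2x2
    covariance matrix in closed form. With an identity covariance this
    reduces to the squared Euclidean distance; an elongated covariance
    shrinks distances along the stretched axis, which is what lets the
    clustering handle elliptical clusters."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    # dx^T * inv(cov) * dx
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
```

In the clustering algorithm this expression replaces the squared Euclidean distance inside the membership and update formulas.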

  1. Saudi License Plate Recognition Algorithm Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    Khaled Suwais; Rana Al-Otaibi; Ali Alshahrani

    2013-01-01

    License plate recognition (LPR) is an image processing technology that is used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing the vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed. The recognition accuracy of the algorithm is about 93.3%.

  2. A new classification algorithm based on RGH-tree search

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we put forward a new classification algorithm based on RGH-tree search and perform a classification analysis and comparison study. The algorithm saves computing resources and increases classification efficiency. The experiment shows that the algorithm achieves better results in dealing with three-dimensional, multi-class data, and that it has better generalization ability for a small training set and a large test set.

  3. The Result Integration Algorithm Based on Matching Strategy

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper provides a new algorithm: a result integration algorithm based on a matching strategy. The algorithm extracts the title and abstract of Web pages, calculates the relevance between the query string and the Web pages, decides which Web pages are accepted or rejected, and sorts them in the user interface. The experimental results indicate clearly that the new algorithm improves the precision of the meta-search engine. This technique is very useful for meta-search engines.

  4. An Incremental Algorithm of Text Clustering Based on Semantic Sequences

    Institute of Scientific and Technical Information of China (English)

    FENG Zhonghui; SHEN Junyi; BAO Junpeng

    2006-01-01

    This paper proposes an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of similar semantic sequence sets, the candidate cluster with minimum entropy overlap value is selected as a result cluster each time in this algorithm. The comparison of experimental results shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, and this is especially obvious on sets of long documents.

  5. A generalized GPU-based connected component labeling algorithm

    CERN Document Server

    Komura, Yukihiro

    2016-01-01

    We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.
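The union-find core that CCL algorithms, serial or GPU, are built on can be sketched in a few lines; the GPU version in the abstract parallelizes the merging step, which is not shown here:

```python
def label_components(grid):
    """Two-pass connected component labeling with union-find on a binary
    grid (4-connectivity). Returns the number of connected components of
    1-cells; on a lattice percolation configuration this is the cluster
    count."""
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving keeps trees flat
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                parent[(r, c)] = (r, c)
                # merge with already-visited occupied neighbors
                if r > 0 and grid[r - 1][c]:
                    union((r, c), (r - 1, c))
                if c > 0 and grid[r][c - 1]:
                    union((r, c), (r, c - 1))
    return len({find(p) for p in parent})

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
n_components = label_components(grid)
```

Swapping the neighbor loop for a different adjacency (honeycomb, triangular, or an arbitrary graph) is what makes the generalized formulation lattice-agnostic.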

  6. Fixed-point blind source separation algorithm based on ICA

    Institute of Scientific and Technical Information of China (English)

    Hongyan LI; Jianfen MA; Deng'ao LI; Huakui WANG

    2008-01-01

    This paper introduces a fixed-point learning algorithm based on independent component analysis (ICA); the model and process of this algorithm and simulation results are presented. Kurtosis was adopted as the estimation rule of independence. The results of the experiment show that, compared with the traditional ICA algorithm based on random gradients, this algorithm has advantages such as fast convergence and no need for any dynamic parameters. The algorithm is a highly efficient and reliable method for blind signal separation.
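The fixed-point iteration with a kurtosis contrast can be sketched for a two-source mixture. The rotation mixing and uniform sources below are illustrative assumptions, chosen so the mixture is already white and no separate whitening step is needed:

```python
import math, random

def fixed_point_ica(x1, x2, iters=50):
    """One-unit fixed-point iteration with the kurtosis nonlinearity
    g(u) = u^3 on an already-white 2-D mixture:
        w <- E[x * (w.x)^3] - 3*w,  then renormalize.
    It converges to one independent direction with no step-size to tune,
    which is the 'no dynamic parameter' property the abstract mentions."""
    n = len(x1)
    w = [1.0, 0.0]
    for _ in range(iters):
        u = [w[0] * a + w[1] * b for a, b in zip(x1, x2)]
        g = [v ** 3 for v in u]
        w_new = [sum(a * gv for a, gv in zip(x1, g)) / n - 3.0 * w[0],
                 sum(b * gv for b, gv in zip(x2, g)) / n - 3.0 * w[1]]
        norm = math.hypot(*w_new)
        w = [w_new[0] / norm, w_new[1] / norm]
    return w

# two unit-variance uniform sources mixed by a rotation (a rotation of
# white data stays white, so no whitening step is needed here)
rng = random.Random(7)
s1 = [rng.uniform(-3 ** 0.5, 3 ** 0.5) for _ in range(2000)]
s2 = [rng.uniform(-3 ** 0.5, 3 ** 0.5) for _ in range(2000)]
ca, sa = math.cos(0.6), math.sin(0.6)
x1 = [ca * a - sa * b for a, b in zip(s1, s2)]
x2 = [sa * a + ca * b for a, b in zip(s1, s2)]
w = fixed_point_ica(x1, x2)
```

The projection w.x should line up (up to sign) with one of the original sources.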

  7. Adaptive Central Force Optimization Algorithm Based on the Stability Analysis

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2015-01-01

    Full Text Available In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented by introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of the ACFO algorithm is compared with that of another improved CFO algorithm and an evolutionary-based algorithm using 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.

  8. Clonal Selection Based Memetic Algorithm for Job Shop Scheduling Problems

    Institute of Scientific and Technical Information of China (English)

    Jin-hui Yang; Liang Sun; Heow Pueh Lee; Yun Qian; Yan-chun Liang

    2008-01-01

    A clonal selection based memetic algorithm is proposed for solving job shop scheduling problems in this paper. In the proposed algorithm, the clonal selection and the local search mechanism are designed to enhance exploration and exploitation. In the clonal selection mechanism, clonal selection, hypermutation and receptor edit theories are presented to construct an evolutionary searching mechanism which is used for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined using some well-known benchmark problems. Numerical results validate the effectiveness of the proposed algorithm.

  9. Structural visualization of expert nursing: Development of an assessment and intervention algorithm for delirium following abdominal and thoracic surgeries.

    Science.gov (United States)

    Watanuki, Shigeaki; Takeuchi, Tomiko; Matsuda, Yoshimi; Terauchi, Hidemasa; Takahashi, Yukiko; Goshima, Mitsuko; Nishimoto, Yutaka; Tsuru, Satoko

    2006-01-01

    An assessment and intervention algorithm for delirium following abdominal and thoracic surgeries was developed based upon the current knowledge base. The sources of information included the literature and clinical expertise. The assessment and intervention algorithm was structured and visualized so that patient-tailored and risk-stratified prediction/prevention, assessment, and intervention could be carried out. Accumulation of clinical outcome data will be necessary in a future validation study to identify the relative weights of the risk factors and the clinical utility of the algorithm.

  10. Adaptive RED algorithm based on minority game

    Science.gov (United States)

    Wei, Jiaolong; Lei, Ling; Qian, Jingjing

    2007-11-01

    With more and more applications appearing and Internet technology developing, relying on end systems alone cannot satisfy the complicated QoS demands of the network; router mechanisms must participate in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) to avoid congestion. Focusing on the interaction between routers, the paper applies the minority game to describe the interaction of users and observes its effect on the average queue length. Since the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game can model the interactions of the agents and tune the ARED parameters α and β toward their best values. Adaptive RED based on the minority game optimizes ARED and smooths the average queue length. The paper also extends the network simulator platform NS by adding new elements. Simulations have been implemented and the results show that the new algorithm can reach the anticipated objectives.
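
    The RED core that ARED (and the minority-game variant described here) tunes can be sketched as one update step: an exponentially weighted moving average of the queue length, then a linear drop probability between two thresholds. The parameter names and default values below are illustrative assumptions, not values from the paper:

    ```python
    def red_step(avg, queue_len, wq=0.002, min_th=5.0, max_th=15.0, max_p=0.1):
        """One RED step: EWMA of the instantaneous queue length, then the
        linear early-drop probability between min_th and max_th."""
        avg = (1 - wq) * avg + wq * queue_len
        if avg < min_th:
            p = 0.0
        elif avg >= max_th:
            p = 1.0
        else:
            p = max_p * (avg - min_th) / (max_th - min_th)
        return avg, p
    ```

    An adaptive scheme such as ARED would additionally adjust `max_p` (the α/β tuning discussed above) based on how the average queue drifts between the thresholds.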

  11. Web Based Genetic Algorithm Using Data Mining

    Directory of Open Access Journals (Sweden)

    Ashiqur Rahman

    2016-09-01

    Full Text Available This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in an education web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. By weighting the feature vectors using a Genetic Algorithm we can optimize the prediction accuracy and get a marked improvement over raw classification. It further shows that when the number of features is small, feature weighting works better than feature selection alone. Many leading educational institutions are working to establish an online teaching and learning presence. Several systems with different capabilities and approaches have been developed to deliver online education in an academic setting. In particular, Michigan State University (MSU) has pioneered some of these systems to provide an infrastructure for online instruction. The research presented here was performed on a part of the latest online educational system developed at MSU, the Learning Online Network with Computer-Assisted Personalized Approach (LON-CAPA).
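
    Genetic-algorithm feature weighting of the kind described here can be sketched as evolving a weight per feature and scoring each weight vector by leave-one-out 1-NN accuracy under a weighted distance. This is a minimal sketch under assumed parameters (population size, crossover, mutation step), not the paper's LON-CAPA setup:

    ```python
    import random

    def knn_accuracy(weights, X, y):
        # leave-one-out 1-NN accuracy with weighted squared Euclidean distance
        correct = 0
        for i in range(len(X)):
            best, best_d = None, float("inf")
            for j in range(len(X)):
                if i == j:
                    continue
                d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, X[i], X[j]))
                if d < best_d:
                    best_d, best = d, y[j]
            correct += best == y[i]
        return correct / len(X)

    def ga_feature_weights(X, y, pop_size=12, gens=20, seed=0):
        rng = random.Random(seed)
        n = len(X[0])
        pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda w: -knn_accuracy(w, X, y))
            parents = pop[: pop_size // 2]          # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n)           # one-point crossover
                child = a[:cut] + b[cut:]
                k = rng.randrange(n)                # clamped mutation
                child[k] = min(1.0, max(0.0, child[k] + rng.uniform(-0.2, 0.2)))
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda w: knn_accuracy(w, X, y))
    ```

    On data where one feature is informative and another is noise, down-weighting the noise feature is what lifts the weighted 1-NN accuracy.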

  12. Path Planning for Robots Using a Parallel Ant Colony Algorithm Based on Assessment and Division of Labor

    Institute of Scientific and Technical Information of China (English)

    吕凌; 曾碧

    2011-01-01

    In order to efficiently solve the path-planning problem of mobile robots in complex dynamic environments, a parallel ant colony optimization algorithm based on assessment and division of labor is proposed. The method consists of a control center and independent processing units. Each processing unit uses a division-of-labor ant colony to optimize the ants' search from both local and global perspectives and then sends its result to the control center. The control center coordinates the results received from the independent processing units and uses an assessment mechanism to make the final decision. Simulation results show that the algorithm is feasible and effective.

  13. An Algorithm of Image Quality Assessment Based on Attention Selection and Sense Capacity

    Institute of Scientific and Technical Information of China (English)

    郑江云

    2014-01-01

    Image quality assessment simulates human subjective opinion with a computational model. Based on the different sensitivities of the human visual system (HVS) to frequency-domain distortions and its attention-selection property, a new image quality assessment algorithm is proposed. First, the high- and low-frequency signals of an image are represented by the coefficients of the visual saliency map and the wavelet approximation coefficients, respectively. Then, the low-frequency error and high-frequency error are computed by different means, and the product of the two errors is adopted as the objective quality score. The new method is validated against subjective quality scores on the LIVE database, where the linear correlation coefficient reaches 0.9064. Experimental results show that the performance of the new method is superior to the PSNR and SSIM algorithms.

  14. A meta-learning system based on genetic algorithms

    Science.gov (United States)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimating the future states of parameters and is also applied to the finding of optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research in genetic algorithms and their capability to learn to learn.

  15. DYNAMIC LABELING BASED FPGA DELAY OPTIMIZATION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    吕宗伟; 林争辉; 张镭

    2001-01-01

    DAG-MAP is an FPGA technology mapping algorithm for delay optimization, and the labeling phase is the algorithm's kernel. This paper studies the labeling phase and presents an improved labeling method. Experimental results on MCNC benchmarks show that the improved method is more effective than the original method while the computation time is almost the same.

  16. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least square method. A weighted fusion algorithm, in which the relationship between weight coefficients and measurement noise is established, is proposed by taking the correlation of measurement noise into account. A simplified weighted fusion algorithm is then deduced on the assumption that measurement noise is uncorrelated. In addition, an algorithm that can adjust the weight coefficients in the simplified algorithm by estimating measurement noise from the measurements is presented. It is proved by simulation and experiment that the precision of a multi-sensor system based on these algorithms is better than that of multi-sensor systems based on other algorithms.
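
    For the uncorrelated-noise case described above, the weighted least-squares fusion of several sensor readings reduces to inverse-variance weighting: each measurement is weighted by the reciprocal of its noise variance, and the fused variance is smaller than any individual one. A minimal sketch (the function name and interface are illustrative):

    ```python
    def fuse(measurements, variances):
        """Weighted least-squares fusion of independent measurements of one
        quantity: weights are inversely proportional to the noise variances."""
        inv = [1.0 / v for v in variances]
        s = sum(inv)
        estimate = sum(m * w for m, w in zip(measurements, inv)) / s
        fused_variance = 1.0 / s  # always smaller than the best single sensor
        return estimate, fused_variance
    ```

    The adaptive variant in the abstract would estimate the `variances` online from the measurement stream rather than assume them known.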

  17. Gradient-based Taxis Algorithms for Network Robotics

    OpenAIRE

    Blum, Christian; Hafner, Verena V.

    2014-01-01

    Finding the physical location of a specific network node is a prototypical task for navigation inside a wireless network. In this paper, we consider in depth the implications of wireless communication as a measurement input of gradient-based taxis algorithms. We discuss how gradients can be measured and determine the errors of this estimation. We then introduce a gradient-based taxis algorithm as an example of a family of gradient-based, convergent algorithms and discuss its convergence in th...

  18. LEACH Algorithm Based on Load Balancing

    Directory of Open Access Journals (Sweden)

    Wangang Wang

    2013-09-01

    Full Text Available This paper discusses the advantages of the LEACH algorithm and existing improved models, taking the well-known hierarchical clustering routing protocol LEACH as its research object. The paper points out that the algorithm does not take the capacity of the cluster head node into account, which leads to an unreasonable cluster structure. This research proposes an energy-uniform cluster and cluster head selection mechanism in which a "pseudo cluster head" concept is introduced to coordinate with a "load monitor" mechanism and a "load leisure" mechanism, maintaining the load balance of cluster heads and the stability of the network topology. On the basis of the LEACH protocol and its improved algorithms LEACH-C, CEFL, and DCHS, the NS2 simulation tool is applied to analyze the improved algorithm. Simulation results show that the LEACH-P protocol effectively increases energy utilization efficiency, lengthens network lifetime, and balances network load.
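
    The baseline cluster-head election that LEACH variants modify is the rotating threshold T(n): a node that has not served as head in the last 1/p rounds becomes head with probability T(n), which rises to 1 as the rotation window closes. A sketch of the standard formula (p = 0.05 is the usual illustrative value, not taken from this paper):

    ```python
    def leach_threshold(r, p=0.05):
        """LEACH cluster-head election threshold T(n) for round r, applied to
        nodes that have not been cluster head in the last 1/p rounds."""
        return p / (1 - p * (r % round(1 / p)))
    ```

    Improved schemes such as the one above add residual-energy or load terms to this threshold so that election is no longer purely random.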

  19. Combined string searching algorithm based on knuth-morris- pratt and boyer-moore algorithms

    Science.gov (United States)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they work with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger shift in case of a character mismatch. The article provides an example that illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt, and combined algorithms and shows the advantage of the latter in solving the string searching problem.
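
    The "take the larger shift" idea can be sketched as a forward scan that, on a mismatch, computes both a KMP-style shift (from the failure function) and a bad-character shift (from the rightmost occurrence table) and advances by their maximum. This is a simplified sketch of the hybrid principle, not the article's exact algorithm; for simplicity it restarts comparison after each shift:

    ```python
    def failure_function(p):
        fail = [0] * len(p)
        k = 0
        for i in range(1, len(p)):
            while k and p[i] != p[k]:
                k = fail[k - 1]
            if p[i] == p[k]:
                k += 1
            fail[i] = k
        return fail

    def combined_search(text, pattern):
        """Index of the first occurrence of pattern in text, or -1.
        On mismatch, shift by the larger of the KMP shift and the
        bad-character shift; both are individually safe, so their max is too."""
        if not pattern:
            return 0
        fail = failure_function(pattern)
        last = {c: i for i, c in enumerate(pattern)}  # rightmost occurrence
        s = 0
        while s <= len(text) - len(pattern):
            j = 0
            while j < len(pattern) and text[s + j] == pattern[j]:
                j += 1
            if j == len(pattern):
                return s
            kmp_shift = max(1, j - (fail[j - 1] if j > 0 else 0))
            c = text[s + j]
            if c in last and last[c] < j:
                bc_shift = j - last[c]   # align c with its rightmost earlier occurrence
            elif c in last:
                bc_shift = 1             # c occurs at or after j: only shift 1 is safe
            else:
                bc_shift = j + 1         # c not in pattern: jump past it
            s += max(kmp_shift, bc_shift)
        return -1
    ```

    Because each candidate shift is a valid lower bound on the next possible match position, taking the maximum never skips an occurrence.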

  20. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    Full Text Available This study proposes an intelligent algorithm that can realize information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Its five key parts, including information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking use of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of the algorithm on its performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  1. An Image Quality Assessment Algorithm Based on Phase Feature for Wavelet Domain

    Institute of Scientific and Technical Information of China (English)

    李爱华; 王丽彬

    2015-01-01

    Pixel-based methods for image quality assessment ignore the structural features of natural images and fail to measure some particular distortion types well. To address this problem, an image quality assessment algorithm using phase features is proposed. The method is based on the fact that the human eye understands an image mainly according to its low-level features. First, phase congruency is taken as the primary feature of the assessment function. Then, the modulus of the wavelet transform is taken as the second feature. Finally, the overall image quality score is obtained from these two features. Experimental results on a standard image database demonstrate the effectiveness of the proposed method, whose scores show good consistency with subjective human assessment.

  2. An Energy Consumption Optimized Clustering Algorithm for Radar Sensor Networks Based on an Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Jiang Ting

    2010-01-01

    Full Text Available We optimize the cluster structure to solve problems such as the uneven energy consumption of radar sensor nodes and random cluster head selection in traditional clustering routing algorithms. According to a defined cost function for clusters, we present a clustering algorithm based on free-space path loss. In addition, we propose energy and distance pheromones based on the residual energy and aggregation of the radar sensor nodes. Following this bionic heuristic, a new ant colony-based clustering algorithm for radar sensor networks is also proposed. Simulation results show that this algorithm achieves a better balance of energy consumption and thus remarkably prolongs the lifetime of the radar sensor network.

  3. Genetic Algorithms, Neural Networks, and Time Effectiveness Algorithm Based Air Combat Intelligence Simulation System

    Institute of Scientific and Technical Information of China (English)

    曾宪钊; 成冀; 安欣; 方礼明

    2002-01-01

    This paper introduces a new Air Combat Intelligence Simulation System (ACISS) for a 32 versus 32 air combat and describes three methods: Genetic Algorithms (GA) for the multi-targeting decision and for learning the Evading Missile Rule Base, Neural Networks (NN) for the maneuvering decision, and the Time Effectiveness Algorithm (TEA) for adjudicating air combat and evaluating evading-missile effectiveness.

  4. Parallel Implementation of Classification Algorithms Based on Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wenbo Wang

    2012-09-01

    Full Text Available As an important task of data mining, classification has received considerable attention in many applications, such as information retrieval and web searching. The growing volume of information produced by technological progress and the increasing individual needs of data mining make classifying very large datasets a challenging task. To deal with this problem, many researchers try to design efficient parallel classification algorithms. This paper briefly introduces classification algorithms and cloud computing, analyzes the weak points of present parallel classification algorithms, and then proposes a new model of parallel classification. It mainly introduces a parallel Naïve Bayes classification algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm improves on the original algorithm's performance and can process large datasets efficiently on commodity hardware.
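
    Naïve Bayes training maps naturally onto MapReduce because it only needs counts: the map phase emits one pair per (class, feature, value) observation, and the reduce phase sums them. The sketch below imitates that split in plain Python (the record layout, binary features, and Laplace smoothing are illustrative assumptions, not the paper's implementation):

    ```python
    import math
    from collections import defaultdict

    def nb_map(record):
        """Map phase: emit ((key), 1) count pairs for one labeled record."""
        label, features = record
        yield (("class", label), 1)
        for idx, value in enumerate(features):
            yield ((label, idx, value), 1)

    def nb_reduce(pairs):
        """Reduce phase: sum the counts per key."""
        counts = defaultdict(int)
        for key, n in pairs:
            counts[key] += n
        return counts

    def nb_predict(counts, features, classes, alpha=1.0):
        """Pick the class maximizing the Laplace-smoothed log NB score
        (binary features assumed, hence the 2 * alpha denominator)."""
        total = sum(counts[("class", c)] for c in classes)
        best, best_score = None, float("-inf")
        for c in classes:
            score = math.log(counts[("class", c)] / total)
            for idx, value in enumerate(features):
                num = counts[(c, idx, value)] + alpha
                den = counts[("class", c)] + 2 * alpha
                score += math.log(num / den)
            if score > best_score:
                best, best_score = c, score
        return best
    ```

    In a real MapReduce job the `nb_map` outputs would be shuffled by key across workers before reduction; here the shuffle is implicit in the single dictionary.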

  5. The Assessment of Middle Managers' Cross-cultural Competency Based on the VIKOR Algorithm

    Institute of Scientific and Technical Information of China (English)

    王晓东; 蔡建峰

    2013-01-01

    This paper studies the cross-cultural competency of middle managers, who play a connecting role in the transnational operations of Chinese enterprises, and establishes a cross-cultural competency indicator system. The VIKOR algorithm, which is effective in assessing competitiveness, is then introduced into competency evaluation for the first time, and a VIKOR-based assessment model of managers' cross-cultural competency is established. Finally, the model is verified with an example.
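
    The VIKOR ranking itself is a short computation: for each alternative, a group-utility score S (weighted sum of normalized distances to the ideal), an individual-regret score R (the worst single criterion), and a compromise index Q blending the two. A minimal sketch (the example weights and criteria are assumptions; it also assumes S and R are not constant across alternatives, otherwise the normalization divides by zero):

    ```python
    def vikor(matrix, weights, benefit, v=0.5):
        """VIKOR compromise ranking. matrix: alternatives x criteria;
        benefit[j] is True if larger values of criterion j are better.
        Returns Q; a lower Q means a better compromise alternative."""
        m, n = len(matrix), len(matrix[0])
        best = [max(r[j] for r in matrix) if benefit[j] else min(r[j] for r in matrix)
                for j in range(n)]
        worst = [min(r[j] for r in matrix) if benefit[j] else max(r[j] for r in matrix)
                 for j in range(n)]
        S, R = [], []
        for row in matrix:
            terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                     for j in range(n)]
            S.append(sum(terms))   # group utility
            R.append(max(terms))   # individual regret
        s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
        Q = [v * (S[i] - s_min) / (s_max - s_min)
             + (1 - v) * (R[i] - r_min) / (r_max - r_min)
             for i in range(m)]
        return Q
    ```

    Note the same normalization formula handles both benefit and cost criteria, because `best` and `worst` swap roles for cost criteria.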

  6. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.

  7. SAR Image Segmentation Based On Hybrid PSOGSA Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Amandeep Kaur

    2014-09-01

    Full Text Available Image segmentation is useful in many applications: it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation. Region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. We propose a fast SAR image segmentation method based on the hybrid Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is regarded as a search procedure that looks for an appropriate value in a continuous grayscale interval; the PSO-GSA algorithm is employed to search for the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based, and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding quality.

  8. New Iris Localization Method Based on Chaos Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu

    2005-01-01

    This paper presents a new method based on a Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris localization, and then CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust and can achieve sub-pixel precision. Because Genetic Algorithms (GAs) can search a large space, the algorithm does not need an accurate estimate of the iris center for subsequent localization and hence lowers the requirements on the original iris image processing. On this point, the present localization algorithm is superior to Daugman's algorithm.
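
    The "chaos" ingredient in a chaos genetic algorithm is typically a logistic-map sequence used to seed or perturb the population, exploiting its ergodic coverage of (0, 1). A minimal sketch of the generator (using the fully chaotic control parameter μ = 4; the function name is illustrative):

    ```python
    def logistic_chaos(x0, n, mu=4.0):
        """Logistic-map sequence x_{k+1} = mu * x_k * (1 - x_k), usable as a
        chaotic initialization/disturbance source in a chaos genetic algorithm."""
        xs = []
        x = x0
        for _ in range(n):
            x = mu * x * (1 - x)
            xs.append(x)
        return xs
    ```

    Scaling each value into a parameter's search interval gives chaotic candidate solutions without a pseudo-random generator.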

  9. A Wire-speed Routing Lookup Algorithm Based on TCAM

    Institute of Scientific and Technical Information of China (English)

    李小勇; 王志恒; 白英彩; 刘刚

    2004-01-01

    An internal structure of Ternary Content Addressable Memory (TCAM) is designed and a Sorting Prefix Block (SPB) algorithm is presented, which is a wire-speed routing lookup algorithm based on TCAM. SPB algorithm makes use of the parallelism of TCAM adequately, and improves the utilization of TCAM by optimum partitions. With the aid of effective management algorithm and memory image, SPB separates critical searching from assistant searching, and improves the searching effect. One performance test indicates that this algorithm can work with different TCAM to meet the requirement of wire-speed routing lookup.

  10. Information criterion based fast PCA adaptive algorithm

    Institute of Scientific and Technical Information of China (English)

    Li Jiawen; Li Congxin

    2007-01-01

    The novel information criterion (NIC) algorithm can find the principal subspace quickly, but it is not an actual principal component analysis (PCA) algorithm and hence cannot find the orthonormal eigenspace corresponding to the principal components of the input vector. This defect limits its application in practice. By weighting the neural network's output of NIC, a modified novel information criterion (MNIC) algorithm is presented. MNIC extracts the principal components and corresponding eigenvectors in a parallel online learning program and overcomes NIC's defect. It is proved to have a single global optimum and a nonquadratic convergence rate, which is superior to conventional PCA online algorithms such as Oja and LMSER. The relationship among Oja, LMSER, and MNIC is exhibited. Simulations show that MNIC converges to the optimum quickly, which demonstrates its validity.

  11. A Multi-Scale Gradient Algorithm Based on Morphological Operators

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.

  12. Cycle-Based Algorithm Used to Accelerate VHDL Simulation

    Institute of Scientific and Technical Information of China (English)

    杨勋; 刘明业

    2000-01-01

    Cycle-based algorithms have very high performance for the simulation of synchronous designs, but they are confined to synchronous designs and are not as accurate as event-driven algorithms. In this paper, a revised cycle-based algorithm is proposed and implemented in a VHDL simulator. An event-driven simulation engine and a cycle-based simulation engine are embedded in the same simulation environment and can be applied to asynchronous and synchronous designs respectively. Thus the simulation performance is improved without losing the flexibility and accuracy of the event-driven algorithm.

  13. Assessment of a Heuristic Algorithm for Scheduling Theater Security Cooperation Naval Missions

    Science.gov (United States)

    2009-03-01

    Master's thesis, Naval Postgraduate School, Monterey, California, March 2009. Title: Assessment of a Heuristic Algorithm for Scheduling Theater Security Cooperation Naval Missions. Approved for public release; distribution is unlimited.

  14. Breech Mechanism Technical Condition Assessment of a Certain Gun Based on the AdaBoost-SVM Algorithm

    Institute of Scientific and Technical Information of China (English)

    杨振军; 苏忠亭; 王伟; 王琳

    2012-01-01

    An on-line technical condition assessment method based on AdaBoost-SVM pattern recognition is introduced, aimed at the situation that technical condition assessment of in-service equipment relies mainly on manual disassembly. The technical condition parameters of the breech mechanism are measured by artificial-recoil on-line test equipment. After feature extraction from the test data by correlation analysis, a support vector machine is introduced to build the breech mechanism technical condition assessment model. Combining this model with the AdaBoost algorithm, each iteration re-weights the misclassified samples and the component classifiers according to the test accuracy, so that a new component classifier generated in the next iteration optimizes the classification result; finally, all component classifiers are combined according to their weights to complete the assessment. A case analysis verifies the correctness and validity of the assessment model.
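
    The AdaBoost re-weighting loop described above can be sketched with decision stumps standing in for the SVM component classifiers: each round picks the weak classifier with the lowest weighted error, assigns it a weight α, and boosts the weights of the samples it misclassified. This is a generic AdaBoost sketch under assumed names, not the paper's AdaBoost-SVM implementation:

    ```python
    import math

    def stump_predict(x, feat, thresh, polarity):
        return polarity if x[feat] > thresh else -polarity

    def train_adaboost(X, y, rounds=5):
        """Minimal AdaBoost with decision stumps (labels in {-1, +1})."""
        n = len(X)
        w = [1.0 / n] * n
        ensemble = []
        for _ in range(rounds):
            best, best_err = None, float("inf")
            for feat in range(len(X[0])):           # exhaustive stump search
                for thresh in sorted({x[feat] for x in X}):
                    for pol in (1, -1):
                        err = sum(wi for xi, yi, wi in zip(X, y, w)
                                  if stump_predict(xi, feat, thresh, pol) != yi)
                        if err < best_err:
                            best_err, best = err, (feat, thresh, pol)
            err = max(best_err, 1e-10)
            alpha = 0.5 * math.log((1 - err) / err)  # component classifier weight
            feat, thresh, pol = best
            # re-weight: misclassified samples gain weight, then normalize
            w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thresh, pol))
                 for xi, yi, wi in zip(X, y, w)]
            z = sum(w)
            w = [wi / z for wi in w]
            ensemble.append((alpha, best))
        return ensemble

    def predict(ensemble, x):
        s = sum(a * stump_predict(x, f, t, p) for a, (f, t, p) in ensemble)
        return 1 if s >= 0 else -1
    ```

    In the paper's setting each stump would be replaced by an SVM trained on the re-weighted samples, with the same α-weighted vote at the end.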

  15. QOS-BASED MULTICAST ROUTING OPTIMIZATION ALGORITHMS FOR INTERNET

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Most multimedia applications require strict Quality-of-Service (QoS) guarantees during communication between a single source and multiple destinations. This paper presents a QoS multicast routing algorithm based on a Genetic Algorithm (QMRGA). Simulation results demonstrate that the algorithm is capable of discovering a set of near-optimal, non-dominated QoS-based multicast routes within a few iterations, even for network environments with uncertain parameters.

  16. Improved FCLSD algorithm based on LTE/LTE-A system

    Directory of Open Access Journals (Sweden)

    Kewen Liu

    2011-08-01

    Full Text Available In order to meet the high data rate, large capacity, and low latency requirements of LTE, advanced MIMO technology has been introduced into the LTE system and has become one of its core physical-layer technologies. Among the various MIMO detection algorithms, the ZF and MMSE linear detection algorithms are the simplest, but their performance is poor. The MLD algorithm achieves optimal detection performance but is too complex to be applied in practice. The CLSD algorithm has detection performance similar to MLD with lower complexity, but the uncertainty of its complexity creates hardware difficulties. The FCLSD algorithm maximizes the advantages of CLSD and solves these practical problems. Based on the FCLSD algorithm and combined with practical LTE/LTE-A system applications, this article designs two improved algorithms, which can be used flexibly and adaptively in the various antenna configurations and modulation scenes of LTE/LTE-A spatial multiplexing MIMO systems. The simulation results show that the improved algorithms achieve performance approximating the original FCLSD algorithm; in addition, they have fixed complexity and can be carried out by parallel processing.

  17. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    Science.gov (United States)

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
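
    The list-based cooling idea can be sketched as follows: seed a list of temperatures from the scale of random move deltas, always use the list's maximum in the Metropolis test, and replace that maximum with a value derived from each accepted uphill move so the list adapts to the landscape. This is a simplified sketch under assumed parameters, not the paper's exact update rule:

    ```python
    import math
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def lbsa_tsp(dist, list_len=20, iters=500, seed=42):
        """List-based SA for TSP with 2-opt (segment reversal) moves."""
        rng = random.Random(seed)
        n = len(dist)
        tour = list(range(n))
        rng.shuffle(tour)
        best = tour[:]
        # seed the temperature list from the scale of random move deltas
        temps = []
        for _ in range(list_len):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:]
            cand[i:j + 1] = reversed(cand[i:j + 1])
            temps.append(abs(tour_length(cand, dist) - tour_length(tour, dist)) + 1e-9)
        for _ in range(iters):
            t_max = max(temps)                     # list maximum drives acceptance
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:]
            cand[i:j + 1] = reversed(cand[i:j + 1])
            delta = tour_length(cand, dist) - tour_length(tour, dist)
            if delta <= 0:
                tour = cand
            elif rng.random() < math.exp(-delta / t_max):
                tour = cand
                # adapt the list: replace the max with this move's temperature scale
                temps.remove(t_max)
                temps.append(delta / -math.log(rng.random() + 1e-12))
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        return best
    ```

    Because acceptance always uses the current list maximum, the schedule cools itself as easier uphill moves stop being proposed, with no hand-set cooling rate.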

  18. Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    Lei He

    2013-05-01

    Full Text Available With the continuous development of information technology in today's modern society, enterprises' demand for human resources information mining keeps growing. Addressing this situation, this paper puts forward an enterprise human resources information mining model based on an improved Apriori algorithm. The model introduces data mining technology and the traditional Apriori algorithm, improves on the latter, and divides the association-rule mining task of the original algorithm into two subtasks: producing frequent item sets and producing rules. It uses SQL technology to generate frequent item sets directly and uses a chart-building method to extract the information customers are interested in. The experimental results show that the improved model is more efficient than the original algorithm, and practical application tests show that the improved algorithm is practical and effective.
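
    The first subtask, producing frequent item sets, follows the classic level-wise Apriori pattern: count single items, then repeatedly join frequent (k-1)-itemsets into k-itemset candidates, prune candidates with an infrequent subset, and count support. A minimal in-memory sketch (the paper's version pushes this counting into SQL):

    ```python
    from itertools import combinations

    def apriori(transactions, min_support):
        """Classic Apriori: returns {frequent itemset: support count}."""
        transactions = [frozenset(t) for t in transactions]
        items = {frozenset([i]) for t in transactions for i in t}
        freq = {}
        current = {s for s in items
                   if sum(s <= t for t in transactions) >= min_support}
        k = 1
        while current:
            for s in current:
                freq[s] = sum(s <= t for t in transactions)
            k += 1
            # join step: combine frequent (k-1)-itemsets into k-item candidates
            candidates = {a | b for a in current for b in current if len(a | b) == k}
            # prune step: every (k-1)-subset of a candidate must be frequent
            candidates = {c for c in candidates
                          if all(frozenset(sub) in freq
                                 for sub in combinations(c, k - 1))}
            current = {c for c in candidates
                       if sum(c <= t for t in transactions) >= min_support}
        return freq
    ```

    Rule generation, the second subtask, would then split each frequent itemset into antecedent/consequent pairs and keep those meeting a confidence threshold.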

  19. Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ju

    2013-01-01

    Full Text Available The clonal selection algorithm is improved and proposed as a method to solve optimization problems in iterative learning control, and a clonal selection based optimal iterative learning control algorithm with random disturbance is presented. In the algorithm, the size of the search space is decreased while the convergence speed of the algorithm is increased. In addition, a model-modifying device is used in the algorithm to cope with uncertainty in the plant model. Simulations show that the convergence speed is satisfactory regardless of whether the plant model is precise, including for nonlinear plants. The simulation tests verify that a controlled system with random disturbance can reach stability using the improved iterative learning control law but not the traditional control law.

  20. An optimal scheduling algorithm based on task duplication

    Institute of Scientific and Technical Information of China (English)

    Ruan Youlin; Liu Gan; Zhu Guangxi; Lu Xiaofeng

    2005-01-01

    When the communication time is relatively shorter than the computation time for every task, the task duplication based scheduling (TDS) algorithm proposed by Darbha and Agrawal generates an optimal schedule. Park and Choe also proposed an extended TDS algorithm whose optimality condition is less restricted than that of the TDS algorithm, but the condition is very complex and difficult to satisfy when the number of tasks is large. An efficient algorithm is proposed whose optimality condition is less restricted and simpler than those of both algorithms, and whose schedule length is also shorter. The time complexity of the proposed algorithm is O(v²), where v represents the number of tasks.

  1. OPTIMIZATION BASED ON IMPROVED REAL-CODED GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ShiYu; YuShenglin

    2002-01-01

    An improved real-coded genetic algorithm is proposed for the global optimization of functions. The new algorithm is based on an assessment of the searching performance of the basic real-coded genetic algorithm, whose operations are briefly discussed and selected. A kind of chaos sequence is described in detail and added to the new algorithm as a disturbance factor. A field-partition strategy is also used to improve the structure of the new algorithm. Numerical experiments show that the new genetic algorithm can find the global optimum of complex functions with satisfactory precision.
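The idea of perturbing a real-coded GA with a chaos sequence can be sketched as follows. The record does not specify which chaotic map or operators are used, so the logistic map stands in for the chaos sequence, and the crossover, disturbance scale, and field-partition strategy here are assumptions:

```python
import random

def logistic_chaos(x0, n, mu=4.0):
    """Logistic map x <- mu*x*(1-x): a stand-in for the paper's chaos sequence."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_ga(f, lo, hi, pop_size=30, gens=100, seed=1):
    """Minimize f on [lo, hi] with a real-coded GA; children are perturbed
    by a chaotic disturbance factor instead of plain random mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    chaos = logistic_chaos(0.31, gens * pop_size)
    ci = 0
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1 - w) * b      # arithmetic crossover
            child += (chaos[ci] - 0.5) * (hi - lo) * 0.1  # chaotic disturbance
            ci += 1
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=f)

ga_best = chaotic_ga(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```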

  2. Robust adaptive beamforming algorithm based on Bayesian approach

    Institute of Scientific and Technical Information of China (English)

    Xin SONG; Jinkuan WANG; Yinghua HAN; Han WANG

    2008-01-01

    The performance of adaptive array beamforming algorithms substantially degrades in practice because of a slight mismatch between the actual and presumed array responses to the desired signal. A novel robust adaptive beamforming algorithm based on a Bayesian approach is therefore proposed. The algorithm responds to the current environment by estimating the direction of arrival (DOA) of the actual signal from observations. The computational complexity of the proposed algorithm is reduced compared with other algorithms, since a recursive method is used to obtain the inverse matrix. In addition, it is strongly robust to uncertainty in the actual signal DOA and makes the mean output array signal-to-interference-plus-noise ratio (SINR) consistently approach the optimum. Simulation results show that the proposed algorithm outperforms conventional adaptive beamforming algorithms.

  3. Generating Decision Trees Method Based on Improved ID3 Algorithm

    Institute of Scientific and Technical Information of China (English)

    Yang Ming; Guo Shuxu; Wang Jun

    2011-01-01

    The ID3 algorithm is a classical decision tree learning algorithm in data mining. The algorithm tends to choose attributes with more values, which affects the efficiency of classification and prediction when building a decision tree. This article proposes a new approach based on an improved ID3 algorithm. The new algorithm introduces an importance factor λ when calculating the information entropy. It can strengthen the weight of important attributes of a tree and reduce that of non-important attributes. The algorithm overcomes the flaw of the traditional ID3 algorithm, which tends to choose attributes with more values, and also improves the efficiency and flexibility of the process of generating decision trees.
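A weighted attribute-selection criterion of this kind can be sketched as below. Placing λ as a multiplier on the information gain is an assumption (the record only states that λ enters the entropy calculation), and the tiny dataset is illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def weighted_gain(rows, labels, attr, importance):
    """Information gain of `attr`, scaled by its importance factor lambda."""
    n = len(labels)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return importance.get(attr, 1.0) * (entropy(labels) - cond)

# 'a' splits the labels perfectly (plain gain 1.0); 'b' has plain gain ~0.31,
# but a large importance factor makes it the preferred split.
rows = [{'a': 'x', 'b': 'x'}, {'a': 'x', 'b': 'x'},
        {'a': 'y', 'b': 'x'}, {'a': 'y', 'b': 'y'}]
labels = ['+', '+', '-', '-']
gain_a = weighted_gain(rows, labels, 'a', {})
gain_b = weighted_gain(rows, labels, 'b', {'b': 4.0})
```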

  4. Fuzzy Rules for Ant Based Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Amira Hamdi

    2016-01-01

    Full Text Available This paper provides a new intelligent technique for the semisupervised data clustering problem that combines the Ant System (AS) algorithm with the fuzzy c-means (FCM) clustering algorithm. Our proposed approach, called the F-ASClass algorithm, is a distributed algorithm inspired by the foraging behavior observed in ant colonies. The ability of ants to find the shortest path forms the basis of our proposed approach. In the first step, several colonies of cooperating entities, called artificial ants, are used to find shortest paths in a complete graph that we call the graph-data. The number of colonies used in F-ASClass is equal to the number of clusters in the dataset. In the second step, the partition matrix of the dataset found by the artificial ants is given to the fuzzy c-means technique in order to assign the objects left unclassified in the first step. The proposed approach is tested on artificial and real datasets, and its performance is compared with those of the K-means, K-medoid, and FCM algorithms. The experimental section shows that F-ASClass performs better in terms of classification error rate, accuracy, and separation index.

  5. A POCS-Based Algorithm for Blocking Artifacts Reduction

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yi-hong; CHENG Guo-hua; YU Song-yu

    2006-01-01

    An algorithm for blocking artifacts reduction in the DCT domain for block-based image coding was developed. The algorithm is based on the projection onto convex sets (POCS) theory. Based on the fact that the DCT characteristics of shifted blocks differ in the presence of blocking artifacts, a novel smoothness constraint set and the corresponding projection operator were proposed to reduce the blocking artifacts by discarding the undesired high-frequency coefficients in the shifted DCT blocks. The experimental results show that the proposed algorithm outperforms conventional algorithms in terms of objective quality, subjective quality, and convergence property.

  6. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    Full Text Available We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for single-processor architectures. The experimental results show that the chaos-based cryptosystem possesses good statistical properties, and that the parallel algorithm performs much better than serial ones; it would be useful for encrypting/decrypting large files or multimedia.
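A serial sketch of a PWLCM-driven stream cipher is shown below. The MPI master/slave parallelization is not reproduced (each worker could run an independent keystream segment), and the key parameters `x0` and `p` are illustrative, not the paper's values:

```python
def pwlcm(x, p):
    """Piecewise linear chaotic map on (0, 1), control parameter p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric about x = 0.5

def xor_crypt(data, x0=0.2345, p=0.3):
    """Encrypt or decrypt (XOR is its own inverse) with a PWLCM keystream.
    x0 and p act as the secret key."""
    out, x = bytearray(), x0
    for byte in data:
        x = pwlcm(x, p)
        out.append(byte ^ (int(x * 256) % 256))
    return bytes(out)

cipher = xor_crypt(b'attack at dawn')
plain = xor_crypt(cipher)
```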

  7. Heuristic Reduction Algorithm Based on Pairwise Positive Region

    Institute of Scientific and Technical Information of China (English)

    QI Li; LIU Yu-shu

    2007-01-01

    To guarantee an optimal reduct set, a heuristic reduction algorithm is proposed which considers the distinguishing information between the members of each pair of decision classes. First, the pairwise positive region is defined, based on which the pairwise significance measure is calculated between the members of each pair of classes. Finally, the weighted pairwise significance of an attribute is used as the attribute reduction criterion, which indicates the necessity of attributes very well. By introducing a noise tolerance factor, the new algorithm can tolerate noise to some extent. Experimental results show the advantages of our novel heuristic reduction algorithm over the traditional attribute dependency based algorithm.

  8. Survey of gene splicing algorithms based on reads.

    Science.gov (United States)

    Si, Xiuhua; Wang, Qian; Zhang, Lei; Wu, Ruo; Ma, Jiquan

    2017-09-05

    Gene splicing is the process of assembling a large number of unordered short sequence fragments into the original genome sequence as accurately as possible. Several popular read-based splicing algorithms are reviewed in this article, including reference genome algorithms and de novo splicing algorithms (greedy extension, Overlap-Layout-Consensus graph, De Bruijn graph). We also discuss a new splicing method based on the MapReduce strategy and Hadoop. By comparing these algorithms, some conclusions are drawn and some suggestions on gene splicing research are made.
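Of the de novo approaches listed, the De Bruijn graph one can be sketched in a best-case setting: error-free reads covering a short, non-repetitive sequence, so the graph is a simple chain. The read data and `k` below are illustrative:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """De Bruijn graph: nodes are (k-1)-mers, one edge per distinct k-mer."""
    kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}
    graph = defaultdict(list)
    indeg = defaultdict(int)
    for kmer in kmers:
        graph[kmer[:-1]].append(kmer[1:])
        indeg[kmer[1:]] += 1
    return graph, indeg

def assemble(reads, k):
    """Walk the graph from the unique in-degree-0 node. Assumes the best
    case above, so the walk is a simple path."""
    graph, indeg = de_bruijn(reads, k)
    node = next(n for n in list(graph) if indeg[n] == 0)
    seq = node
    while graph[node]:
        node = graph[node].pop()
        seq += node[-1]  # each edge contributes one new base
    return seq

contig = assemble(["ATGGC", "GGCGT"], 3)
```

Real assemblers must additionally handle sequencing errors, repeats, and Eulerian-path ambiguity, which this sketch ignores.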

  9. PHC: A Fast Partition and Hierarchy-Based Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHOU HaoFeng(周皓峰); YUAN QingQing(袁晴晴); CHENG ZunPing(程尊平); SHI BaiLe(施伯乐)

    2003-01-01

    Cluster analysis is a process to classify data in a specified data set. In this field, much attention is paid to high-efficiency clustering algorithms. In this paper, the features of current partition-based and hierarchy-based algorithms are reviewed, and a new hierarchy-based algorithm, PHC, is proposed by combining the advantages of both kinds of algorithms; it uses cohesion and closeness to amalgamate the clusters. Compared with similar algorithms, the performance of PHC is improved and the quality of clustering is guaranteed, both of which are proved by the theoretical and experimental analyses in the paper.

  10. Brain MR image segmentation improved algorithm based on probability

    Science.gov (United States)

    Liao, Hengxu; Liu, Gang; Guo, Xiantang

    2017-08-01

    The local weight voting algorithm is a current mainstream segmentation algorithm. It takes full account of the influence of the image likelihood and the prior probabilities of the labels on the segmentation results. But this method can still be improved, since its essence is to select the label with the maximum probability. If the probability of a label is 70%, that may be acceptable mathematically, but in the actual segmentation it may be wrong. So we use a matrix completion algorithm as a supplement: when the probability from the former is larger, the result of the former algorithm is adopted; when the probability from the latter is larger, the result of the latter algorithm is adopted. This is equivalent to adding an automatic algorithm-selection switch that can theoretically ensure that the accuracy of the proposed algorithm is superior to that of the local weight voting algorithm. At the same time, we propose an improved matrix completion algorithm based on an enumeration method. In addition, this paper also uses a multi-parameter registration model to reduce the influence of registration on segmentation. The experimental results show that the accuracy of the algorithm is better than that of the common segmentation algorithms.

  11. Adaptive image contrast enhancement algorithm for point-based rendering

    Science.gov (United States)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphics update rate (at least 30 Hz) on commodity PC hardware with traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform traditional triangle-based rendering in speed when applied to highly complex soft tissue cutting models. Nevertheless, PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms. Experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm, and that the proposed contrast enhancement algorithm can be utilized in (or is compatible with) various variants of the conventional PBR algorithm.

  12. Research on Algorithms for Mining Distance-Based Outliers

    Institute of Scientific and Technical Information of China (English)

    WANG Lizhen; ZOU Likun

    2005-01-01

    Outlier detection is an important and valuable research topic in KDD (knowledge discovery in databases). The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even weather forecasting. In the existing methods we have seen for finding outliers, the notion of DB- (distance-based) outliers is not computationally restricted to small values of the number of dimensions k and goes beyond the data space. Here, we study algorithms for mining DB-outliers, focusing on algorithms unlimited by k. First, we present a partition-based algorithm (PBA), whose key idea is to gain efficiency by divide-and-conquer. Second, we present an optimized algorithm called the object-class-based algorithm (OCBA), whose computation is independent of k and whose efficiency is as good as that of the cell-based algorithm. We provide experimental results showing that the two new algorithms have better execution efficiency.
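The DB(p, D)-outlier definition underlying these algorithms can be sketched with a simple nested-loop detector. This is not the PBA or OCBA of the record, just the baseline both improve on; the exact treatment of the point itself in the neighbor count varies between formulations, and the sample points are illustrative:

```python
import math

def db_outliers(points, p, D):
    """DB(p, D)-outliers: a point is an outlier when at least a fraction p
    of the other points lie farther than D from it. Nested-loop sketch
    with early exit."""
    n = len(points)
    max_neighbors = math.floor(n * (1.0 - p))  # allowed points within D
    outliers = []
    for i, a in enumerate(points):
        count = 0
        for j, b in enumerate(points):
            if i != j and math.dist(a, b) <= D:
                count += 1
                if count > max_neighbors:
                    break                      # too many close neighbors
        else:
            outliers.append(a)
    return outliers

# four clustered points and one isolated point
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5)]
outs = db_outliers(pts, 0.7, 1.0)
```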

  13. Grover quantum searching algorithm based on weighted targets

    Institute of Scientific and Technical Information of China (English)

    Li Panchi; Li Shiyong

    2008-01-01

    The current Grover quantum searching algorithm cannot identify differences in importance among the search targets when it is applied to an unsorted quantum database, as the probability for each search target is equal. To solve this problem, a Grover searching algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance. Using these weight coefficients, the targets are represented as quantum superposition states. Second, a novel Grover searching algorithm based on the quantum superposition of the weighted targets is constructed. With this algorithm, the probability of obtaining each target approximates its weight coefficient, which shows the flexibility of the algorithm. Finally, the validity of the algorithm is proved by a simple searching example.

  14. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in a real business circumstance. The algorithm first applies two ensemble learning methods, Bagging and Boosting, and the outputs from both are then merged to form a user interest model, from which job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been done and analysed, showing that the proposed algorithm achieves significant improvements in accuracy, recall rate and coverage, compared with recommendation algorithms such as UserCF and ItemCF.

  15. Lazy learner text categorization algorithm based on embedded feature selection

    Institute of Scientific and Technical Information of China (English)

    Yan Peng; Zheng Xuefeng; Zhu Jianyong; Xiao Yunhong

    2009-01-01

    To avoid the curse of dimensionality, text categorization (TC) algorithms based on machine learning (ML) have to use a feature selection (FS) method to reduce the dimensionality of the feature space. Although widely used, the FS process generally causes information loss and thus has considerable side effects on the overall performance of TC algorithms. On the basis of the sparsity characteristic of text vectors, a new TC algorithm based on lazy feature selection (LFS) is presented. As a new type of embedded feature selection approach, the LFS method can greatly reduce the dimension of features without any information loss, improving both the efficiency and the performance of the algorithm. Experiments show that the new algorithm simultaneously achieves much higher performance and efficiency than several classical TC algorithms.

  16. Global optimal path planning for mobile robot based on improved Dijkstra algorithm and ant system algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A novel method of global optimal path planning for mobile robots was proposed, based on an improved Dijkstra algorithm and the ant system algorithm. The method comprises three steps: first, the MAKLINK graph theory is adopted to establish the free-space model of the mobile robot; second, the improved Dijkstra algorithm is adopted to find a sub-optimal collision-free path; and third, the ant system algorithm is used to adjust and optimize the location of the sub-optimal path so as to generate the global optimal path for the mobile robot. Computer simulation experiments were carried out, and the results show that the method is correct and effective. Comparison of the results confirms that the proposed method is better than the hybrid genetic algorithm for global optimal path planning.
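The shortest-path search in the second step can be sketched with the classic Dijkstra algorithm (the record's improvement and the ant-system refinement stage are not reproduced; the small graph is illustrative, not a MAKLINK free-space model):

```python
import heapq

def dijkstra(graph, src, dst):
    """Classic Dijkstra shortest path on a weighted adjacency dict."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:          # raises KeyError if dst is unreachable
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

free_space = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'D': 6}, 'C': {'D': 3}}
route, cost = dijkstra(free_space, 'A', 'D')
```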

  17. A real time vehicles detection algorithm for vision based sensors

    CERN Document Server

    Płaczek, Bartłomiej

    2011-01-01

    Vehicle detection plays an important role in traffic control at signalised intersections. This paper introduces a vision-based algorithm for recognising the presence of vehicles in detection zones. The algorithm uses linguistic variables to evaluate local attributes of an input image; the image attributes are categorised as vehicle, background or unknown features. Experimental results on complex traffic scenes show that the proposed algorithm is effective for real-time vehicle detection.

  18. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    OpenAIRE

    Xue Shan; Liu Song

    2015-01-01

    Although commercial recommendation systems have made certain achievements in travelling route development, the recommendation system is facing a series of challenges because of people’s increasing interest in travelling. It is obvious that the core content of the recommendation system is the recommendation algorithm, whose advantages can bring great effect to the recommendation system. Based on this, this paper applies the traditional collaborative filtering algorithm for analy...

  19. A New RWA Algorithm Based on Multi-Objective

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this article, we studied the associated research problems and challenges of routing and wavelength assignment (RWA) in WDM (wavelength division multiplexing) networks. Various RWA approaches are examined and compared, and we propose a new multi-objective RWA algorithm. In this new algorithm, we consider multiple network optimization objectives to set up a lightpath with maximized profit and shortest path under limited resources. By comparison and analysis, the proposed algorithm is much better ...

  20. Variable Neighborhood Search Based Algorithm for University Course Timetabling Problem

    OpenAIRE

    Kralev, Velin; Kraleva, Radoslava

    2016-01-01

    In this paper a variable neighborhood search approach is presented as a method for solving combinatorial optimization problems. A variable neighborhood search based algorithm has been developed for the problem of designing the university course timetable. This algorithm is used to solve a real problem regarding the university course timetable design, and it is compared with other algorithms tested on the same sets of input data. The object and the methodology of study are p...

  1. TOA estimation algorithm based on multi-search

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new time of arrival (TOA) estimation algorithm is proposed. The algorithm computes the optimal sub-correlation length based on SNR theory, so the robustness of TOA acquisition is well guaranteed. Then, according to the actual transmission environment and network system, a multi-search method is given. The simulation results show that the algorithm has a very high application value in the realization of wireless location systems (WLS).

  3. Hindi Parser Based on CKY Algorithm

    OpenAIRE

    Nitin Hambir; Ambrish Srivastav

    2012-01-01

    A Hindi parser is a tool which takes a Hindi sentence and verifies whether or not the sentence is correct according to Hindi grammar. Parsing is important for natural language processing tools. The Hindi parser uses the CKY (Cocke-Kasami-Younger) parsing algorithm for parsing the Hindi language. It parses the whole sentence and generates a matrix
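The matrix the CKY algorithm fills can be sketched with a recognizer over a toy English grammar in Chomsky Normal Form, standing in for the Hindi grammar, which the record does not give:

```python
from itertools import product

def cky_recognize(words, grammar, lexicon, start='S'):
    """CKY recognizer: table[i][j] holds the nonterminals deriving
    words[i..j]; grammar rules must be in Chomsky Normal Form (A -> B C)."""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):                    # terminal rules
        table[i][i] = {a for a, ws in lexicon.items() if w in ws}
    for span in range(2, n + 1):                     # longer spans
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                    # split point
                for b, c in product(table[i][k], table[k + 1][j]):
                    for a, (x, y) in grammar:
                        if (x, y) == (b, c):
                            table[i][j].add(a)
    return start in table[0][n - 1]

# toy CNF grammar: S -> NP VP, VP -> V NP
grammar = [('S', ('NP', 'VP')), ('VP', ('V', 'NP'))]
lexicon = {'NP': {'she', 'rice'}, 'V': {'eats'}}
ok1 = cky_recognize(['she', 'eats', 'rice'], grammar, lexicon)
ok2 = cky_recognize(['eats', 'she', 'rice'], grammar, lexicon)
```

A full parser would additionally store back-pointers in each cell to recover the parse tree.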

  4. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for cloud-cover assessment for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  5. Intelligent Hybrid Cluster Based Classification Algorithm for Social Network Analysis

    Directory of Open Access Journals (Sweden)

    S. Muthurajkumar

    2014-05-01

    Full Text Available In this paper, we propose a hybrid clustering based classification algorithm built on a mean approach to effectively classify and mine the ordered sequences (paths) from weblog data in order to perform social network analysis. In the system proposed in this work for social pattern analysis, the sequences of human activities are typically analyzed by switching behaviors, which are likely to produce overlapping clusters. A robust modified Boosting algorithm is proposed for the hybrid clustering based classification to cluster the data. This work is useful in providing a connection between the aggregated features from the network data and the traditional indices used in social network analysis. Experimental results show that the proposed algorithm improves the decision results from data clustering when combined with the proposed classification algorithm, providing better classification accuracy when tested with the weblog dataset. In addition, the algorithm improves the predictive performance, especially for multiclass datasets, for which it can increase the accuracy.

  6. A Vehicle Detection Algorithm Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep learning based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection for retaining discriminative information so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on testing data sets.

  7. A PRESSURE-BASED ALGORITHM FOR CAVITATING FLOW COMPUTATIONS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ling-xin; ZHAO Wei-guo; SHAO Xue-ming

    2011-01-01

    A pressure-based algorithm for the prediction of cavitating flows is presented. The algorithm employs a set of equations including the Navier-Stokes equations and a cavitation model describing the phase change between liquid and vapor. A pressure-based method is used to construct the algorithm, and the coupling between pressure and velocity is considered. The pressure correction equation is derived from a new continuity equation which employs a source term related to the phase change rate instead of the material derivative of density Dρ/Dt. This pressure-based algorithm allows for the computation of steady or unsteady, 2-D or 3-D cavitating flows. Two 2-D cases, the flow around a flat-nose cylinder and the flow around a NACA0015 hydrofoil, are simulated respectively, and the periodic cavitation behaviors associated with the re-entrant jets are captured. The algorithm shows good capability of computing time-dependent cavitating flows.

  8. New Iterated Decoding Algorithm Based on Differential Frequency Hopping System

    Institute of Scientific and Technical Information of China (English)

    LIANG Fu-lin; LUO Wei-xiong

    2005-01-01

    A new iterated decoding algorithm is proposed for a differential frequency hopping (DFH) encoder concatenated with a multi-frequency shift keying (MFSK) modulator. According to the character of the frequency hopping (FH) pattern trellis produced by the DFH function, maximum a posteriori (MAP) probability theory is applied to realize its iterated decoding. Further, the initial conditions of the new MAP-based iterated algorithm are modified for better performance. Finally, the simulation results, compared with those of traditional algorithms, show good anti-interference performance.

  9. Topology control based on quantum genetic algorithm in sensor networks

    Institute of Scientific and Technical Information of China (English)

    SUN Lijuan; GUO Jian; LU Kai; WANG Ruchuan

    2007-01-01

    Nowadays, two trends appear in the application of sensor networks, in which both multi-service and quality of service (QoS) are supported. In terms of the goals of low energy consumption and high connectivity, control of the topology is crucial. An algorithm for topology control based on a quantum genetic algorithm in sensor networks is proposed. The advantage of the quantum genetic algorithm over the conventional genetic algorithm is demonstrated in simulation experiments, and the goals of high connectivity and low energy consumption are reached.

  10. Surname Inherited Algorithm Research Based on Artificial Immune System

    Directory of Open Access Journals (Sweden)

    Jing Xie

    2013-06-01

    Full Text Available To keep the diversity of antibodies in the evolution process of an artificial immune system, this paper puts forward a surname inheritance algorithm based on the clonal selection algorithm, and applies it to identify and forecast the vibration data to which a CA6140 horizontal lathe machining a slender shaft workpiece is prone. The results show that the algorithm is flexible in application and strongly adaptive, offers an effective approach to improving the efficiency of the algorithm, has good global searching performance, and has broad application prospects.

  11. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.

  12. Support vector classification algorithm based on variable parameter linear programming

    Institute of Scientific and Technical Information of China (English)

    Xiao Jianhua; Lin Jian

    2007-01-01

    To solve the problems of SVMs in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable-parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the optimization problem of classification, decreasing the computation time and reducing the complexity compared with the original model. The adjusted punishment parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.

  13. A multicast dynamic wavelength assignment algorithm based on matching degree

    Institute of Scientific and Technical Information of China (English)

    WU Qi-wu; ZHOU Xian-wei; WANG Jian-ping; YIN Zhi-hong; ZHANG Long

    2009-01-01

    Wavelength assignment with multiple multicast requests in fixed-routing WDM networks is studied. A new multicast dynamic wavelength assignment algorithm based on matching degree is presented. First, the wavelength matching degree between available wavelengths and multicast routing trees is introduced into the algorithm. Then, the wavelength assignment is translated into a maximum weight matching in a bipartite graph, and this matching problem is solved by using an extended Kuhn-Munkres algorithm. The simulation results prove that the overall optimal wavelength assignment scheme is obtained in polynomial time. At the same time, the proposed algorithm can reduce the connection blocking probability and improve system resource utilization.
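The maximum-weight bipartite matching formulation can be illustrated with a brute-force search over assignments. The extended Kuhn-Munkres algorithm itself, which reaches the same optimum in polynomial time, is not reproduced, and the matching-degree values below are hypothetical:

```python
from itertools import permutations

def best_assignment(degree):
    """Exhaustive maximum-weight assignment of wavelengths to multicast
    trees; feasible only for tiny instances, shown for illustration."""
    n_trees, n_waves = len(degree), len(degree[0])
    best_total, best_pairs = float('-inf'), None
    for perm in permutations(range(n_waves), n_trees):
        total = sum(degree[t][w] for t, w in enumerate(perm))
        if total > best_total:
            best_total, best_pairs = total, list(enumerate(perm))
    return best_total, best_pairs

# hypothetical matching degrees: rows are multicast trees, columns wavelengths
degree = [[0.9, 0.4, 0.1],
          [0.8, 0.7, 0.3],
          [0.2, 0.6, 0.5]]
total, pairs = best_assignment(degree)
```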

  14. A new parallel algorithm for image matching based on entropy

    Institute of Scientific and Technical Information of China (English)

    董开坤; 胡铭曾

    2001-01-01

    Presents a new parallel image matching algorithm based on the concept of an entropy feature vector, suitable for SIMD computers. In comparison with other algorithms it has the following advantages: (1) the spatial information of an image is appropriately introduced into the definition of image entropy; (2) a large number of multiplication operations are eliminated, which speeds up the algorithm; (3) the shortcoming of having to perform a global calculation first is overcome. The algorithm therefore has very good locality and is well suited to parallel processing.
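As a point of reference for the entropy features above, here is plain Shannon entropy of a grayscale patch. Note this omits the paper's spatial weighting, which the abstract does not specify; it shows only the base quantity an entropy feature vector would be built from.

```python
import math
from collections import Counter

def patch_entropy(patch):
    """Shannon entropy (bits) of the gray-level histogram of a 2-D patch.
    A uniform spread of levels maximizes entropy; a constant patch has
    entropy zero."""
    counts = Counter(v for row in patch for v in row)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

uniform = [[0, 1], [2, 3]]   # four distinct levels -> 2 bits
flat    = [[5, 5], [5, 5]]   # constant patch      -> 0 bits
```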

  15. The RSA Cryptoprocessor Hardware Implementation Based on Modified Montgomery Algorithm

    Institute of Scientific and Technical Information of China (English)

    CHEN Bo; WANG Xu; RONG Meng-tian

    2005-01-01

    The RSA (Rivest-Shamir-Adleman) public-key cryptosystem is widely used in information security areas such as encryption and digital signatures. Based on a modified Montgomery modular multiplication algorithm, a new architecture using a CSA (carry save adder) was presented to implement modular multiplication. Compared with popular modular multiplication algorithms using two CSAs, the presented algorithm uses only one CSA, so it improves the time efficiency of the RSA cryptoprocessor and saves about half of the hardware resources for modular multiplication. As the encryption data size n increases, the clock cycles for the encryption procedure are reduced in Θ(n²) compared with the modular multiplication algorithms using two CSAs.
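For readers unfamiliar with the arithmetic being accelerated, the following is a plain-Python sketch of Montgomery multiplication (REDC). It models only the mathematics; the carry-save-adder hardware structure the abstract discusses is not represented.

```python
def montgomery_mul(a, b, n, k):
    """Compute a * b mod n via Montgomery reduction with R = 2**k,
    where n is odd and R > n. REDC replaces the division by n with
    cheap shifts and masks modulo R."""
    r = 1 << k
    n_prime = (-pow(n, -1, r)) % r          # n * n_prime ≡ -1 (mod r)

    def redc(t):
        m = (t * n_prime) % r               # chosen so t + m*n ≡ 0 (mod r)
        u = (t + m * n) >> k                # exact division by r
        return u - n if u >= n else u

    a_bar = (a << k) % n                    # map into the Montgomery domain
    b_bar = (b << k) % n
    return redc(redc(a_bar * b_bar))        # multiply, then leave the domain
```

For example, `montgomery_mul(7, 5, 17, 5)` agrees with `(7 * 5) % 17`.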

  16. An Incremental Rule Acquisition Algorithm Based on Rough Set

    Institute of Scientific and Technical Information of China (English)

    YU Hong; YANG Da-chun

    2005-01-01

    Rough set theory is a valid mathematical theory developed in recent years, which has the ability to deal with imprecise, uncertain, and vague information. This paper presents a new incremental rule acquisition algorithm based on rough set theory. First, the relation of new instances to the original rule set is discussed. Then the change laws of attribute reduction and value reduction when a new instance is added are studied. Next, a new incremental learning algorithm for decision tables is presented within the framework of rough sets. Finally, the new algorithm and the classical algorithm are analyzed and compared in theory and by experiments.

  17. CUDT: a CUDA based decision tree algorithm.

    Science.gov (United States)

    Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming

    2014-01-01

    Decision trees are among the most widely used classification methods in data mining, and much research has focused on improving their performance. However, those algorithms were developed to run on traditional distributed systems, where latency cannot be improved while processing the huge data generated by ubiquitous sensing nodes without the help of new technology. To improve data processing latency in huge-data mining, we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compared it with a traditional CPU version. The results show that CUDT is 5 to 55 times faster than Weka-j48 and achieves an 18-fold speedup over SPRINT on large data sets.

  18. Novel Adaptive Beamforming Algorithm Based on Wavelet Packet Transform

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaofei; Xu Dazhuan

    2005-01-01

    An analysis of the received signal of array antennas shows that the received signal has multi-resolution characteristics, and hence wavelet packet theory can be used to detect the signal. By employing wavelet packet theory in adaptive beamforming, a wavelet packet transform-based adaptive beamforming algorithm (WP-ABF) is proposed. This WP-ABF algorithm uses the wavelet packet transform as preprocessing, and the transformed signal is fed to a least mean square algorithm to implement the adaptive beamforming. White noise can be removed under the wavelet packet transform according to the different characteristics of the signal and white noise in the transform domain. Theoretical analysis and simulations demonstrate that the proposed WP-ABF algorithm converges faster than the conventional adaptive beamforming algorithm and the wavelet transform-based beamforming algorithm. Simulation results also reveal that the convergence of the algorithm relates closely to the wavelet basis and series: convergence improves as the series increases, and for the same series it improves with increasing regularity of the wavelet basis.
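The least mean square update at the heart of the scheme above is simple enough to sketch directly. This version works on a scalar reference signal rather than wavelet-packet-transformed array data, and the test signal is invented for illustration.

```python
import random

def lms_filter(x, d, taps=4, mu=0.05):
    """LMS adaptive filter: output y = w . window, error e = d - y,
    weight update w += mu * e * window."""
    w = [0.0] * taps
    out = []
    for n in range(len(x)):
        window = [x[n - i] if n - i >= 0 else 0.0 for i in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                                   # error drives adaptation
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        out.append(y)
    return w, out

# identify an identity system: desired signal equals the input,
# so the weights should converge toward [1, 0, 0, 0]
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
w, y = lms_filter(x, d=x)
```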

  19. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed

    2014-11-01

    This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.

  20. A danger-theory-based immune network optimization algorithm.

    Science.gov (United States)

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms have a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated by changes in the environment guide different levels of immune response, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through their own danger signals and then triggers immune responses of self-regulation, so population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions meeting the required accuracies within the specified number of function evaluations.

  1. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

    Full Text Available This paper investigates an evolutionary designing system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of a combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.

  2. A Practical Localization Algorithm Based on Wireless Sensor Networks

    CERN Document Server

    Huang, Tao; Xia, Feng; Jin, Cheng; Li, Liang

    2010-01-01

    Many localization algorithms and systems have been developed by means of wireless sensor networks for both indoor and outdoor environments. To achieve higher localization accuracy, most existing localization algorithms rely on extra hardware, which increases the cost and greatly limits the range of location-based applications. In this paper we present a method which can effectively meet the different localization accuracy requirements of most indoor and outdoor location services in realistic applications. Our algorithm is composed of two phases: a partition phase, in which the target region is split into small grids, and a localization refinement phase, in which a higher-accuracy location is generated by applying a refinement algorithm. A realistic demo system using our algorithm has been developed to illustrate its feasibility and availability. The results show that our algorithm can improve the localization accuracy.

  3. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.

  4. Acceleration of Directional Median Filter Based Deinterlacing Algorithm (DMFD)

    Directory of Open Access Journals (Sweden)

    Addanki Purna Ramesh

    2011-12-01

    Full Text Available This paper presents a novel directional median filter based deinterlacing algorithm (DMFD). DMFD is a content-adaptive spatial deinterlacing algorithm that finds the direction of the edge and applies median filtering along the edge to interpolate the odd pixels from 5 pixels of the upper and 5 pixels of the lower even lines of the field. The proposed algorithm gives a significant improvement of 3 dB for the baboon standard test image, which has highly textured content, compared with CADEM, DOI, and MELA, and also gives improved average PSNR compared with previous algorithms. The algorithm was written and tested in C and ported onto Altera's NIOS II embedded soft processor configured in a Cyclone II FPGA. The ISA of the Nios II processor was extended with two additional instructions, for calculating the absolute difference and the minimum of four numbers, accelerating the FPGA implementation of the algorithm by 3.2 times.

  5. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    Full Text Available In order to discover the structure of local communities more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node. Since the agglomeration ability of a single node is less than that of multiple nodes, the community extension in this paper no longer starts from the initial node only, but from a node cluster containing this initial node, in which the nodes are relatively densely connected with each other. The algorithm mainly includes two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real networks and simulated networks.

  6. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  7. Image completion algorithm based on texture synthesis

    Institute of Scientific and Technical Information of China (English)

    Zhang Hongying; Peng Qicong; Wu Yadong

    2007-01-01

    A new algorithm is proposed for completing, in a visually plausible way, the missing parts caused by the removal of foreground or background elements from an image of natural scenery. The major contributions of the proposed algorithm are: (1) for most natural images there is a strong orientation of texture or color distribution, so a method is introduced to compute the main direction of the texture and complete the image by limiting the search to one direction, carrying out image completion quite fast; (2) there exists a synthesis ordering for image completion; the search order of the patches is defined to ensure that regions with more known information and the structures are completed before other regions are filled in; (3) to improve the visual effect of texture synthesis, an adaptive scheme is presented to determine the size of the template window for capturing features at various scales. A number of examples are given to demonstrate the effectiveness of the proposed algorithm.

  8. A Novel Heuristic Algorithm Based on Clark and Wright Algorithm for Green Vehicle Routing Problem

    OpenAIRE

    Mehdi Alinaghian; Zahra Kaviani; Siyavash Khaledan

    2015-01-01

    A significant portion of Gross Domestic Production (GDP) in any country belongs to the transportation system. Transportation equipment, on the other hand, is a great consumer of oil products. Many attempts have been made to cut down vehicles' Greenhouse Gas (GHG) emissions. In this paper a novel heuristic algorithm based on the Clark and Wright Algorithm, called Green Clark and Wright (GCW), is presented for the Vehicle Routing Problem with regard to fuel consumption. The objective function ...

  9. Collaborative Filtering Algorithms Based on Kendall Correlation in Recommender Systems

    Institute of Scientific and Technical Information of China (English)

    YAO Yu; ZHU Shanfeng; CHEN Xinmeng

    2006-01-01

    In this work, Kendall correlation based collaborative filtering algorithms for recommender systems are proposed. The Kendall correlation method is used to measure the correlation amongst users by considering the relative order of the users' ratings. The Kendall based algorithm rests upon a more general model and thus could be more widely applied in e-commerce. Another finding of this work is that considering only positively correlated neighbors in prediction, in both the Pearson and Kendall algorithms, achieves higher accuracy than considering all neighbors, with only a small loss of coverage.
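The Kendall correlation the abstract builds on compares every pair of items two users have both rated and counts agreements and disagreements in relative order. A minimal sketch (ties are simply ignored here; the paper's exact variant may handle them differently):

```python
def kendall_tau(a, b):
    """Kendall rank correlation between two users' rating vectors:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(a)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1          # same relative order
            elif s < 0:
                disc += 1          # opposite relative order
    return (conc - disc) / (n * (n - 1) // 2)

tau_same = kendall_tau([1, 2, 3, 4], [1, 2, 3, 4])   # identical rankings
tau_flip = kendall_tau([1, 2, 3, 4], [4, 3, 2, 1])   # reversed rankings
```

Identical rankings give +1, fully reversed rankings give -1, which is what makes the measure usable as a similarity weight between users.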

  10. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum-likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
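A minimal version of the EM fit the abstract relies on, for a two-component one-dimensional Gaussian mixture. The paper fits multi-feature (motion, color) models per pixel; the synthetic data here are invented purely to show the E-step/M-step loop.

```python
import math
import random

def em_gmm_1d(data, iters=100):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    responsibilities, the M-step re-estimates means, variances, weights."""
    mu = [min(data), max(data)]            # crude but serviceable init
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                     # E-step
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) /
                 math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        for k in range(2):                 # M-step
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
            pi[k] = nk / len(data)
    return mu, var, pi

random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(100)] +
        [random.gauss(10.0, 0.5) for _ in range(100)])
mu, var, pi = em_gmm_1d(data)
```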

  11. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a brand-new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional information and then matches and identifies through one-dimensional correlation. Because normalization is performed, matching remains correct even when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while preserving matching accuracy.
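The two ideas in the abstract, projection to 1-D and normalized correlation, can be sketched as follows. The tiny patch is invented for illustration; a real matcher would slide these signatures over the projections of a larger search image.

```python
import math

def projections(img):
    """Row and column sums: the 2-D patch collapses into two 1-D
    signatures, which is the dimensionality reduction the abstract uses
    to speed up matching."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

def ncc(u, v):
    """Normalized correlation of two 1-D signatures; scaling either
    signal by a constant factor leaves the score unchanged, giving the
    brightness invariance the abstract claims."""
    su = math.sqrt(sum(x * x for x in u))
    sv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (su * sv)

rows, cols = projections([[1, 2], [3, 4]])
score = ncc(rows, [2 * r for r in rows])   # doubled brightness, same score
```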

  12. Road Network Vulnerability Analysis Based on Improved Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunpeng Wang

    2014-01-01

    Full Text Available We present an improved ant colony algorithm-based approach to assess the vulnerability of a road network and identify the critical infrastructures. This approach improves computational efficiency and allows for its applications in large-scale road networks. This research involves defining the vulnerability conception, modeling the traffic utility index and the vulnerability of the road network, and identifying the critical infrastructures of the road network. We apply the approach to a simple test road network and a real road network to verify the methodology. The results show that vulnerability is directly related to traffic demand and increases significantly when the demand approaches capacity. The proposed approach reduces the computational burden and may be applied in large-scale road network analysis. It can be used as a decision-supporting tool for identifying critical infrastructures in transportation planning and management.

  13. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    Science.gov (United States)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.

  14. Research of Collaborative Filtering Recommendation Algorithm based on Network Structure

    Directory of Open Access Journals (Sweden)

    Hui PENG

    2013-10-01

    Full Text Available This paper combines the classic collaborative filtering algorithm with a personalized recommendation algorithm based on network structure. To address the data sparsity and malicious behavior problems of the traditional collaborative filtering algorithm, the paper introduces a new kind of social network-based collaborative filtering algorithm. In order to improve the accuracy of personalized recommendation, we first define an empty state in the state space of multi-dimensional semi-Markov processes and obtain extended multi-dimensional semi-Markov processes, which are combined with social network analysis theory to yield a social network information flow model. The model describes the flow of information between members of the social network. We then propose a collaborative filtering algorithm based on this social network information flow model. The algorithm uses social network information, combines user trust with user interest, finds the nearest neighbors of the target user, and then forms a recommendation, improving recommendation accuracy. Compared with the traditional collaborative filtering algorithm, it can effectively alleviate the sparsity and malicious behavior problems and significantly improve the quality of the recommender system.

  15. Area Variation Based Color Snake Algorithm for Moving Object Tracking

    Institute of Scientific and Technical Information of China (English)

    Shoum-ik ROYCHOUDHURY; Young-joon HAN

    2010-01-01

    Snake algorithms are known for their strength in extracting the exact contour of an object, but they are apt to be influenced by scattered edges around the control points. Since the shape of a moving object in a 2D image changes greatly due to its rotation and translation in 3D space, the conventional algorithm, which assumes slowly moving objects, cannot provide an appropriate solution. To utilize the advantages of the snake algorithm while minimizing its drawbacks, this paper proposes an area variation based color snake algorithm for moving object tracking. The proposed algorithm includes a new energy term which is used to preserve the shape of an object between two consecutive images. It can also precisely segment objects of interest in complex images, since it is based on color information. Experimental results show that the proposed algorithm is very effective in various environments.

  16. CUDA Based Speed Optimization of the PCA Algorithm

    Directory of Open Access Journals (Sweden)

    Salih Görgünoğlu

    2016-05-01

    Full Text Available Principal Component Analysis (PCA) is an algorithm involving heavy mathematical operations with matrices. The data extracted from face images are usually very large, and processing these data is time consuming. To reduce the execution time of these operations, parallel programming techniques are used. CUDA is a multipurpose parallel programming architecture supported by graphics cards. In this study we have implemented the PCA algorithm using both the classical programming approach and CUDA based implementations with different configurations. The algorithm is subdivided into its constituent calculation steps and evaluated for the positive effects of parallelization on each step, thereby identifying the parts of the algorithm that cannot be improved by parallelization. On the other hand, it is also shown that dramatic improvements in the overall performance of the algorithm are possible with the CUDA based approach.
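As a serial reference point for the matrix work being parallelized above, here is the leading principal component computed by power iteration on the covariance matrix. This is a sketch of one constituent step only (the eigenvector extraction), on a toy data set invented for illustration.

```python
def first_principal_component(data, iters=200):
    """Leading PCA eigenvector: center the data, form the covariance
    matrix, then run power iteration until the direction settles."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]           # renormalize each iteration
    return v

# points scattered along the diagonal: the component points near (1,1)/sqrt(2)
v = first_principal_component([[1, 1], [2, 2.1], [3, 2.9], [4, 4.0], [0, 0.1]])
```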

  17. Novel algorithm for distributed replicas management based on dynamic programming

    Institute of Scientific and Technical Information of China (English)

    Wang Tao; Lu Xianliang; Hou Mengshu

    2006-01-01

    Replicas can improve data reliability in distributed systems. However, traditional algorithms for replica management are based on the assumption that all replicas have uniform reliability, which is inaccurate in some actual systems. To address this problem, a novel algorithm based on dynamic programming is proposed to manage the number and distribution of replicas across nodes. Using a Markov model, replica management is organized as a multi-phase process, and the recursion equations are provided. The algorithm takes into account the heterogeneity of nodes, the expense of maintaining replicas, and the space occupied. Under these constraints, the algorithm achieves high data reliability in a distributed system. The results of a case analysis prove the feasibility of the algorithm.
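The heterogeneous-reliability idea can be illustrated with a small dynamic program. This is a simplified stand-in for the paper's Markov multi-phase formulation: each node carries a hypothetical (cost, availability) pair, and the DP picks a placement within a cost budget that maximizes the chance at least one replica survives.

```python
def best_replica_placement(nodes, budget):
    """Knapsack-style DP over nodes. dp[c] holds the minimal probability
    that ALL replicas chosen so far fail, at total cost c; minimizing
    the joint failure probability maximizes data reliability."""
    dp = {0: 1.0}
    for cost, avail in nodes:
        new = dict(dp)
        for c, pfail in dp.items():
            if c + cost <= budget:
                cand = pfail * (1.0 - avail)       # this node fails too
                if cand < new.get(c + cost, 2.0):
                    new[c + cost] = cand
        dp = new
    return 1.0 - min(dp.values())                  # best achievable reliability

# hypothetical nodes as (storage cost, availability)
nodes = [(2, 0.9), (3, 0.95), (1, 0.6)]
```

With budget 3, pairing the cheap unreliable node with the 0.9 node (joint failure 0.1 × 0.4) beats the single 0.95 node.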

  18. Heuristic based data scheduling algorithm for OFDMA wireless network

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A system model based on a joint-layer mechanism is formulated for optimal data scheduling over fixed point-to-point links in OFDMA ad-hoc wireless networks. A distributed scheduling algorithm (DSA) for system model optimization is proposed that combines subcarriers chosen randomly according to the channel condition of local subcarriers with link power control to limit the interference caused by subcarrier reuse among links. To improve the global fairness of the algorithms, a global power control scheduling algorithm (GPCSA) based on the proposed DSA is presented, which dynamically allocates global power according to the difference between the average carrier-to-noise ratio of the selected local links and the system link protection ratio. Simulation results demonstrate that the proposed algorithms achieve better efficiency and fairness compared with other existing algorithms.

  19. Drilling Path Optimization Based on Particle Swarm Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Guangyu; ZHANG Weibo; DU Yuexiang

    2006-01-01

    This paper presents a new approach based on the particle swarm optimization (PSO) algorithm for solving the drilling path optimization problem, which belongs to a discrete space. Because the standard PSO algorithm is not guaranteed to achieve global or local convergence, the algorithm is improved, on the basis of the mathematical model, by regenerating particles whose evolution has stopped, so as to obtain convergence toward the globally optimal solution. The operators are also improved by establishing a duality transposition method and a handling manner for the elements of the operator, so that the improved operator satisfies the integer coding required in drilling path optimization. Experiments with small node numbers indicate that the improved algorithm is easy to implement, converges fast, and has better global convergence characteristics; hence the new PSO can play a role in solving the drilling path optimization problem.

  20. Image Recovery Algorithm Based on Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Xinghui Zhu

    2014-01-01

    Full Text Available We propose a recovery scheme for image deblurring. The scheme is under the framework of sparse representation and has three main contributions. First, considering the sparsity of natural images, nonlocal overcomplete dictionaries are learned for image patches in our scheme. Then, we code the patches in each nonlocal cluster with the corresponding learned dictionary to recover the whole latent image. In addition, for practical applications, we also propose a method to estimate the blur kernel, making the algorithm usable for blind image recovery. The experimental results demonstrate that the proposed scheme is competitive with some current state-of-the-art methods.

  1. A Novel Heuristic Algorithm Based on Clark and Wright Algorithm for Green Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Mehdi Alinaghian

    2015-08-01

    Full Text Available A significant portion of Gross Domestic Production (GDP) in any country belongs to the transportation system. Transportation equipment, on the other hand, is a great consumer of oil products. Many attempts have been made to cut down vehicles' Greenhouse Gas (GHG) emissions. In this paper a novel heuristic algorithm based on the Clark and Wright Algorithm, called Green Clark and Wright (GCW), is presented for the Vehicle Routing Problem with regard to fuel consumption. The objective function comprises fuel consumption, drivers, and vehicle usage costs. Compared with exact-method solutions for small-sized problems and with Differential Evolution (DE) algorithm solutions for large-scale problems, the results show the efficient performance of the proposed GCW algorithm.
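The classical Clark and Wright savings heuristic that GCW builds on is compact enough to sketch. This version ranks merges by the usual distance saving s(i,j) = d(0,i) + d(0,j) − d(i,j) and merges only at matching route ends; the GCW variant would rank merges by fuel use instead, which is not reproduced here, and the coordinates are invented for illustration.

```python
import math

def clarke_wright(dist, demand, capacity):
    """Savings heuristic for the capacitated VRP with depot = node 0.
    Start with one route per customer, then repeatedly apply the merge
    with the largest saving that respects vehicle capacity."""
    n = len(dist)
    routes = {i: [i] for i in range(1, n)}
    load = {i: demand[i] for i in range(1, n)}
    savings = sorted(
        ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
         for i in range(1, n) for j in range(i + 1, n)),
        reverse=True)
    for s, i, j in savings:
        ri = next((k for k, r in routes.items() if r[-1] == i), None)  # i ends a route
        rj = next((k for k, r in routes.items() if r[0] == j), None)   # j starts one
        if ri is None or rj is None or ri == rj:
            continue
        if load[ri] + load[rj] <= capacity:
            routes[ri] = routes[ri] + routes[rj]
            load[ri] += load[rj]
            del routes[rj], load[rj]
    return list(routes.values())

pts = [(0, 0), (10, 0), (10, 1), (-10, 0)]          # node 0 is the depot
dist = [[math.dist(a, b) for b in pts] for a in pts]
routes = clarke_wright(dist, demand=[0, 1, 1, 1], capacity=2)
```

Customers 1 and 2 sit close together far from the depot, so the big saving merges them onto one vehicle; customer 3, on the opposite side, keeps its own route.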

  2. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which a real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method easily makes the algorithm fall into a local optimum during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the encoding and decoding processes reduce the speed of calculation. So RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to a solution conforming to the constraint conditions.
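The "real-coded avoids binary encode/decode" point can be shown with a plain real-coded GA on a toy objective. This sketch omits the quantum-inspired representation entirely; tournament selection, blend crossover, and Gaussian mutation are assumed operators, and the sphere function stands in for the BP network training error.

```python
import random

def real_ga(fitness, dim, bounds, pop_size=30, gens=80):
    """Real-coded GA: chromosomes are float vectors operated on
    directly, with no binary encoding or decoding step."""
    random.seed(42)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(gens):
        new_pop = [min(pop, key=fitness)]            # elitism
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            alpha = random.random()                  # blend crossover
            child = [alpha * u + (1 - alpha) * v for u, v in zip(p1, p2)]
            if random.random() < 0.2:                # Gaussian mutation
                k = random.randrange(dim)
                child[k] = max(lo, min(hi, child[k] +
                               random.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# minimize the sphere function as a stand-in for a network training error
best = real_ga(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```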

  3. A MODIFIED ANT-BASED TEXT CLUSTERING ALGORITHM WITH SEMANTIC SIMILARITY MEASURE

    Institute of Scientific and Technical Information of China (English)

    Haoxiang XIA; Shuguang WANG; Taketoshi YOSHIDA

    2006-01-01

    Ant-based text clustering is a promising technique that has attracted great research attention. This paper attempts to improve the standard ant-based text-clustering algorithm in two dimensions. On one hand, an ontology-based semantic similarity measure is used in conjunction with the traditional vector-space-model-based measure to provide a more accurate assessment of the similarity between documents. On the other, the ant behavior model is modified to pursue better algorithmic performance. In particular, the ant movement rule is adjusted so as to direct a laden ant toward a dense area of the same type of items as the ant's carried item, and to direct an unladen ant toward an area that contains an item dissimilar to the surrounding items within its Moore neighborhood. Using WordNet as the base ontology for assessing the semantic similarity between documents, the proposed algorithm is tested on a sample set of documents excerpted from the Reuters-21578 corpus, and the experimental results partly indicate that the proposed algorithm performs better than the standard ant-based text-clustering algorithm and the k-means algorithm.

  4. CUDT: A CUDA Based Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Win-Tsung Lo

    2014-01-01

    Full Text Available The decision tree is one of the best-known classification methods in data mining, and many studies have focused on improving its performance. However, those algorithms were developed for and run on traditional distributed systems, whose latency cannot keep up with the huge volumes of data generated by ubiquitous sensing nodes without the help of new technology. To reduce data processing latency in large-scale data mining, we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted extensive experiments to evaluate the performance of CUDT against a traditional CPU version. The results show that CUDT is 5∼55 times faster than Weka-j48 and 18 times faster than SPRINT on large data sets.
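As a reference point for the split-search step that CUDT offloads to the GPU, here is a minimal sequential sketch of the best-threshold search by Gini impurity for one numeric feature with binary class labels (the function name and scoring details are illustrative assumptions, not CUDT's actual CUDA kernels):

```python
def best_split(values, labels):
    """Find the threshold on one feature minimizing weighted Gini impurity."""
    def gini(lab):
        n = len(lab)
        if n == 0:
            return 0.0
        p = sum(lab) / n                 # fraction of class-1 samples
        return 2 * p * (1 - p)           # binary-class Gini impurity

    order = sorted(range(len(values)), key=lambda i: values[i])
    best_t, best_score = None, float("inf")
    for k in range(1, len(order)):       # every split position in sorted order
        left = [labels[order[i]] for i in range(k)]
        right = [labels[order[i]] for i in range(k, len(order))]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(order)
        if score < best_score:
            best_t = (values[order[k - 1]] + values[order[k]]) / 2
            best_score = score
    return best_t, best_score
```

A GPU builder evaluates many features and split positions like this in parallel, while the CPU, as in CUDT, only orchestrates the recursion.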

  5. A NOVEL THRESHOLD BASED EDGE DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Y. RAMADEVI,

    2011-06-01

    Full Text Available Image segmentation is the process of partitioning a digital image into multiple meaningful regions, or sets of pixels, with respect to a particular application; the level to which the subdivision is carried depends on the problem being solved. Edge detection is one of the most frequently used techniques in digital image processing: edges characterize boundaries and are therefore of fundamental importance. There are many ways to perform edge detection. In this paper, edge detection methods such as Sobel, Prewitt, Roberts, Canny, and Laplacian of Gaussian (LoG) are used for segmenting the image, and the Expectation-Maximization (EM) algorithm, Otsu's method and genetic algorithms are also applied. A new edge detection technique is proposed that detects sharp and accurate edges not obtainable with the existing techniques. Applying the proposed method to a given input image with threshold values ranging between 0 and 1, it is observed that sharp edges are recognised properly at a threshold value of 0.68.
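The normalized-threshold idea can be illustrated with a plain Sobel detector; this is a generic sketch (the Sobel kernels and function name are assumptions, not the paper's proposed technique), thresholding the gradient magnitude after scaling it to [0, 1]:

```python
import numpy as np

def sobel_edges(img, threshold=0.68):
    """Binary edge map: threshold the Sobel gradient magnitude
    after normalizing it to [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()                 # normalize to [0, 1]
    return (mag >= threshold).astype(np.uint8)
```

On a simple vertical step image, only the columns adjacent to the step survive the threshold.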

  6. Efficient Satellite Scheduling Based on Improved Vector Evaluated Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Tengyue Mao

    2012-03-01

    Full Text Available Satellite scheduling is a typical multi-peak, many-valley, nonlinear multi-objective optimization problem, and implementing it effectively is a crucial research topic in the space field. This paper discusses the performance of the Vector Evaluated Genetic Algorithm (VEGA), covering its basic principles, implementation and test functions, and then improves it by introducing vector coding, new crossover and mutation operators, new fitness-assignment methods and the retention of good individuals. As a result, the diversity and convergence of the improved VEGA are significantly enhanced, and the algorithm is applied to Earth-Mars orbit optimization. The paper also analyzes the results of the improved VEGA; the performance analysis and evaluation show that although VEGA has had a profound impact on multi-objective evolutionary research, Pareto-based multi-objective evolutionary algorithms appear more effective at obtaining non-dominated solutions, from the perspective of the diversity and convergence of the experimental results. Finally, the improved vector evaluated algorithm is implemented for satellite scheduling in the Visual C++ integrated development environment.
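The non-dominated solutions compared above can be extracted with a standard Pareto-dominance check; a minimal generic sketch for minimization objectives (not part of the paper's VEGA implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```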

  7. ROUTING AND WAVELENGTH ASSIGNMENT ALGORITHMS BASED ON EQUIVALENT NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Qi Xiaogang; Liu Lifang; Lin Sanyang

    2006-01-01

    In this paper, a Wavelength Division Multiplexing (WDM) network model based on equivalent networks is described, and the notions of wavelength-dependent equivalent arc, equivalent network and equivalent multicast tree, among others, are presented. Based on this model and the corresponding Routing and Wavelength Assignment (RWA) strategy, a unicast RWA algorithm and a multicast RWA algorithm are given. The wavelength-dependent equivalent arc captures local RWA scheduling and the equivalent network captures the whole topology of the WDM optical network, so the two algorithms combine flexibility in RWA with optimization over the whole problem. Theoretical analysis and simulation results show that the two algorithms are more capable and of lower complexity than existing RWA algorithms, with complexity depending only on the scale of the equivalent networks. Finally, we prove the algorithms' feasibility and the one-to-one correspondence between equivalent and original multicast trees, and point out the respective strengths and drawbacks of the two algorithms.

  8. A novel bit-quad-based Euler number computing algorithm.

    Science.gov (United States)

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by reusing information obtained while processing the previous bit-quad, the average number of pixels to be checked per bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
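For orientation, the classical bit-quad formula the paper builds on can be sketched directly; this naive version counts three pattern groups per image pass and does none of the paper's information reuse:

```python
import numpy as np

def euler_number(img, connectivity=4):
    """Euler number of a binary image from Gray's bit-quad counts."""
    p = np.pad(np.asarray(img, dtype=int), 1)
    # the four pixels of every 2x2 window
    a, b = p[:-1, :-1], p[:-1, 1:]
    c, d = p[1:, :-1], p[1:, 1:]
    s = a + b + c + d
    c1 = int(np.sum(s == 1))               # quads with one foreground pixel
    c3 = int(np.sum(s == 3))               # quads with three foreground pixels
    cd = int(np.sum((s == 2) & (a == d)))  # the two diagonal quads
    if connectivity == 4:
        return (c1 - c3 + 2 * cd) // 4
    return (c1 - c3 - 2 * cd) // 4         # 8-connectivity
```

A solid square gives 1 (one component, no holes), a square ring gives 0 (one component, one hole).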

  9. [Heart rate measurement algorithm based on artificial intelligence].

    Science.gov (United States)

    Chengxian, Cai; Wei, Wang

    2010-01-01

    Building on the heart rate measurement method that uses time-lapse images of the human cheek, this paper proposes a novel measurement algorithm based on artificial intelligence. Combining fuzzy logic theory, the algorithm identifies heart beat points using a fuzzy membership function defined over the sampled points, and then calculates the heart rate by counting the beat points within a given time period. Experiments show that the algorithm is satisfactory in operability, accuracy and robustness, giving it considerable practical value.

  10. THE PARALLEL RECURSIVE AP ADAPTIVE ALGORITHM BASED ON VOLTERRA SERIES

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Aiming at the nonlinear system identification problem, a parallel recursive affine projection (AP) adaptive algorithm for nonlinear systems based on Volterra series is presented in this paper. The algorithm identifies the Volterra kernels of each order in parallel, recursively estimates the inverse of the autocorrelation matrix of the Volterra input of each order, and remarkably improves the convergence speed of the identification process compared with the NLMS and conventional AP adaptive algorithms based on Volterra series. Simulation results indicate that the proposed method is efficient.

  11. Mercer Kernel Based Fuzzy Clustering Self-Adaptive Algorithm

    Institute of Scientific and Technical Information of China (English)

    李侃; 刘玉树

    2004-01-01

    A novel Mercer kernel based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering: it implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In fuzzy c-means and its variants, the number of clusters must be determined in advance; the proposed self-adaptive algorithm instead obtains the number of clusters automatically through a validity measure function. Finally, experiments show the better performance of the kernel based fuzzy c-means self-adaptive algorithm.

  12. A Novel Approach to Fast Image Filtering Algorithm of Infrared Images based on Intro Sort Algorithm

    CERN Document Server

    Gupta, Kapil Kumar; Niranjan, Jitendra Kumar

    2012-01-01

    In this study we investigate a fast image filtering algorithm based on the introsort algorithm for fast noise reduction in infrared images. The main feature of the proposed approach is that no prior knowledge of the noise is required. It is developed from the Stefan-Boltzmann law and the Fourier law. The approach has the advantage of a low computational load, and it retains edges, details and text information even as the window size increases. Introsort begins with quicksort and switches to heapsort when the recursion depth exceeds a level based on the number of elements being sorted; this reduces comparison time and hence speeds up noise reduction significantly, making the approach applicable to real-time image processing and extending infrared imaging applications to medicine and video conferencing.
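The sort-and-take-the-middle step at the heart of such filtering can be sketched as a plain median filter; Python's built-in Timsort stands in here for introsort (the quicksort/heapsort hybrid behind C++ std::sort), since the idea is the same:

```python
def median_filter(img, k=3):
    """Median-filter the interior of a 2-D list by sorting each k-by-k window."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]        # borders are left unchanged
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = [img[ii][jj]
                      for ii in range(i - r, i + r + 1)
                      for jj in range(j - r, j + r + 1)]
            window.sort()                # introsort in C++; Timsort here
            out[i][j] = window[len(window) // 2]
    return out
```

An isolated impulse is removed because it never reaches the middle of the sorted window.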

  13. Analysis of a wavelet-based robust hash algorithm

    Science.gov (United States)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme: it allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  14. An Event Grouping Based Algorithm for University Course Timetabling Problem

    OpenAIRE

    Kralev, Velin; Kraleva, Radoslava; Yurukov, Borislav

    2016-01-01

    This paper presents the study of an event grouping based algorithm for a university course timetabling problem. Several publications that discuss the problem and approaches to its solution are analyzed. Grouping events into groups with an equal number of events in each group is not applicable to all input data sets; for this reason, a universal approach covering all possible groupings of events into commensurately sized groups is proposed here. Also, an implementation of an algorithm base...

  16. Differences in estimating terrestrial water flux from three satellite-based Priestley-Taylor algorithms

    Science.gov (United States)

    Yao, Yunjun; Liang, Shunlin; Yu, Jian; Zhao, Shaohua; Lin, Yi; Jia, Kun; Zhang, Xiaotong; Cheng, Jie; Xie, Xianhong; Sun, Liang; Wang, Xuanyu; Zhang, Lilin

    2017-04-01

    Accurate estimates of terrestrial latent heat of evaporation (LE) for different biomes are essential for assessing energy, water and carbon cycles. Different satellite-based Priestley-Taylor (PT) algorithms have been developed to estimate LE in different biomes, but large uncertainties remain in their estimates. In this study, we evaluated differences among three satellite-based PT algorithms in estimating terrestrial water flux in different biomes, using ground-observed data from eight eddy covariance (EC) flux towers in China. The results reveal large differences in daily LE estimates among the three PT algorithms when evaluated against EC measurements across the eight ecosystem types. At the forest (CBS) site, all algorithms perform well, with low root mean square error (RMSE) (less than 16 W/m2) and high squared correlation coefficient (R2) (more than 0.9). At the village (HHV) site, the ATI-PT algorithm has the lowest RMSE (13.9 W/m2), with a bias of 2.7 W/m2 and an R2 of 0.66. At the irrigated crop (HHM) site, almost all algorithms underestimate LE, indicating that they may fail to capture wet soil evaporation through their soil moisture parameterizations. In contrast, the SM-PT algorithm shows high R2 values (comparable to those of ATI-PT and VPD-PT) at most of the other biomes (grass, wetland, desert and Gobi). There are no obvious differences in seasonal LE estimation between MODIS NDVI and LAI at most sites. However, all meteorological or satellite-based water-related parameters used in the PT algorithms carry uncertainties in representing water constraints. This analysis highlights the need to improve PT algorithms with regard to water constraints.

  17. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    Science.gov (United States)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives learners the advantages of both traditional learning and E-learning, and web-based mobile-learning systems have created many new ways of working and new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases, but rule explosion is a serious concern: conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. This paper therefore focuses on a new data-mining algorithm, ARGSA (Association Rules based on an improved Genetic Simulated Annealing algorithm), which combines the advantages of the genetic algorithm and simulated annealing to mine association rules. A parallel genetic algorithm and simulated annealing are designed specifically for discovering association rules, and analysis and experiments show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.
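Whatever search strategy generates candidate rules, each candidate is scored by measures such as support and confidence; a minimal sketch of those two measures (the GA/SA search machinery itself is omitted):

```python
def support_confidence(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent."""
    a, c = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_ac = sum(1 for t in transactions if (a | c) <= set(t))
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```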

  18. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

    Full Text Available There is a need to match the accuracy of emission estimates with the outputs of transport models. The overall error rate in long-term traffic forecasts from strategic transport models is likely to be significant, and microsimulation models, whilst high-resolution in nature, may inherit similar measurement errors when they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches and proposes emission models for predicting pollutants other than CO2. A genetic algorithm is adopted to select the predictor variables for the black-box model; the approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed models outperform conventional equations in both accuracy and robustness.
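A genetic algorithm for predictor-variable selection can be sketched as follows; this is a toy illustration with an assumed least-squares fitness and per-variable penalty, not the paper's emission model:

```python
import random
import numpy as np

def ga_select_features(X, y, n_gen=40, pop_size=20, seed=0):
    """Evolve a 0/1 mask over the columns of X; fitness is the least-squares
    residual of the selected columns plus a small per-variable penalty."""
    rng = random.Random(seed)
    n = X.shape[1]

    def fitness(mask):                   # lower is better
        cols = [i for i in range(n) if mask[i]]
        if not cols:
            return float("inf")
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        return float(np.sum((X[:, cols] @ beta - y) ** 2)) + 0.01 * len(cols)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:       # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# y depends only on columns 0 and 2, so the GA should select those columns.
rs = np.random.RandomState(1)
X = rs.randn(60, 4)
y = 2 * X[:, 0] - X[:, 2]
selected = ga_select_features(X, y)
```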

  19. Warehouse Optimization Model Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guofeng Qin

    2013-01-01

    Full Text Available This paper takes the Bao Steel logistics automated warehouse system as an example. The premise is to keep the center of gravity of each shelf below half of the shelf height; as a result, the time needed to store or retrieve goods is reduced, as is the distance between goods of the same kind. A multi-objective optimization model is constructed and solved with a genetic algorithm, yielding a locally optimal solution. Before optimization, the average time to store or retrieve goods was 4.52996 s and the average distance between goods of the same kind was 2.35318 m; after optimization, the average time is 4.28859 s and the average distance is 1.97366 m. The analysis shows that this model can improve the efficiency of cargo storage.

  20. Evolutionary Algorithm Based on Immune Strategy

    Institute of Scientific and Technical Information of China (English)

    WANG Lei; JIAO Licheng

    2001-01-01

    A novel evolutionary algorithm, evolution-immunity strategies (EIS), is proposed with reference to the immune theory of biology. It constructs an immune operator in two steps, vaccination and immune selection, introducing immune concepts and methods into evolution strategies in order to find optimal solutions to difficult problems using locally characteristic information. The detailed process of realizing EIS, comprising six steps, is presented. EIS is analyzed with Markovian theory and proved to converge with probability 1. In EIS, an immune operator is an aggregation of specific operations and procedures, and methods for selecting vaccines and constructing an immune operator are given in this paper. A simulation on the 442-city TSP shows that EIS can restrain the degeneration phenomenon during the evolutionary process and improve the search capability and efficiency, thereby greatly increasing the convergence speed.
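The two-step immune operator can be sketched on a toy bit-string problem; the vaccination pattern and the OneMax-style fitness here are illustrative stand-ins, not the paper's TSP setting:

```python
import random

def immune_evolve(fitness, length, vaccine, n_gen=60, pop_size=30, seed=0):
    """Evolve bit-strings with an immune operator: vaccination injects the
    prior-knowledge bits, then immune selection keeps a child only if it is
    no worse than its parent (restraining degeneration)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(n_gen):
        next_pop = []
        for ind in pop:
            child = ind[:]
            child[rng.randrange(length)] ^= 1      # ordinary mutation
            if rng.random() < 0.5:                 # step 1: vaccination
                for pos, bit in vaccine.items():
                    child[pos] = bit
            # step 2: immune selection -- reject degenerate children
            next_pop.append(child if fitness(child) >= fitness(ind) else ind)
        pop = next_pop
    return max(pop, key=fitness)

# Maximize the number of ones; the vaccine asserts the first three bits are 1.
best = immune_evolve(sum, 20, vaccine={0: 1, 1: 1, 2: 1})
```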

  1. RELAY ALGORITHM BASED ON NETWORK CODING IN WIRELESS LOCAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    Wang Qi; Wang Qingshan; Wang Dongxue

    2013-01-01

    Network coding is a new information-technology development of the 21st century. It can enhance network throughput and save energy, and has mainly been studied under a single transmission rate. However, with the development of wireless networks and equipment, wireless LAN MAC protocols already support multi-rate transmission. This paper investigates the optimal relay selection problem based on network coding. First, the problem is formulated as an optimization problem. Then a relay algorithm based on network coding is proposed, and the transmission time gain of our algorithm over the traditional relay algorithm is analyzed. Finally, we compare the total transmission time and energy consumption of the proposed Network Coding with Relay Assistance (NCRA) algorithm, the Transmission Request (TR) algorithm, and Direct Transmission (DT) without relay, under IEEE 802.11b. The simulation results demonstrate that our algorithm, which improves the coding opportunity through the cooperation of relay nodes, decreases transmission time by up to 17% over the traditional relay algorithms.

  2. Haplotyping a single triploid individual based on genetic algorithm.

    Science.gov (United States)

    Wu, Jingli; Chen, Xixi; Li, Xianchen

    2014-01-01

    The minimum error correction model is an important combinatorial model for haplotyping a single individual. In this article, the triploid individual haplotype reconstruction problem is studied using this model, and a genetic algorithm based method, GTIHR, is presented for reconstructing the triploid individual haplotype. A novel coding method and an effective hill-climbing operator are introduced for the GTIHR algorithm. The relatively short chromosome code leads to a smaller solution space, which helps speed up the convergence process, while the hill-climbing operator ensures that GTIHR converges quickly to a good solution without premature convergence. The experimental results show that algorithm GTIHR can be implemented efficiently and achieves higher reconstruction rates than previous algorithms.

  3. Distribution network planning algorithm based on Hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    GAO Wei-xin; LUO Xian-jue

    2005-01-01

    This paper presents a new algorithm based on the Hopfield neural network to find the optimal solution for an electric distribution network. The algorithm transforms the distribution network planning problem into a directed graph planning problem; the Hopfield network is designed to decide the in-degree of each node and is applied in combination with an energy function. The new algorithm needs neither to code city streets nor to normalize data, so the program is easier to implement. A case study applying the method to a district of 29 streets showed that an optimal plan for such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work offer two advantages over many existing algorithms for electric distribution network planning: fast convergence and no need to code all possible lines.

  4. Switching Equalization Algorithm Based on a New SNR Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is well known that turbo equalization with the max-log-MAP (MLM) rather than the log-MAP (LM) algorithm is insensitive to signal-to-noise ratio (SNR) mismatch. As our first contribution, an improved MLM algorithm called scaled max-log-MAP (SMLM) is presented. Simulation results show that the SMLM scheme dramatically outperforms the MLM without sacrificing robustness against SNR mismatch. Unfortunately, its performance is still inferior to that of the LM algorithm with exact SNR knowledge over the class of high-loss channels. As our second contribution, a switching turbo equalization scheme, which switches between the SMLM and LM schemes, is proposed to practically close the performance gap. It is based on a novel way of estimating the SNR from the reliability values of the extrinsic information of the SMLM algorithm.
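The LM/MLM distinction comes down to the Jacobian logarithm; a minimal sketch of the two operations and the scaling fix-up (the 0.7 constant is a commonly used value in the turbo literature, not the paper's derived factor):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by the log-MAP (LM) algorithm."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-log-MAP (MLM) approximation: the correction term is dropped."""
    return max(a, b)

def scaled_extrinsic(llr, scale=0.7):
    """Damp the over-confident MLM extrinsic LLRs by a constant factor."""
    return scale * llr
```

Because the dropped correction term is always positive, MLM systematically over-states reliabilities, which is what the scaling compensates for.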

  5. A SAR Back Projection Autofocusing Algorithm Based on Legendre Approximation

    Directory of Open Access Journals (Sweden)

    Gao Yang

    2014-06-01

    Full Text Available The Back Projection (BP) algorithm is an important time-domain methodology for Synthetic Aperture Radar (SAR) imaging. However, conventional autofocus techniques are based on frequency-domain imaging algorithms and cannot be directly applied to BP imagery for phase error estimation. In this paper, an autofocus algorithm for BP imagery is proposed. The algorithm takes image sharpness as its objective function and employs a coordinate descent optimization scheme to obtain the optimal phase-correction variables iteratively. In the implementation, a Legendre approximation of the objective function lets the optimal phase estimate be found analytically for each parameter within an iteration, avoiding computationally expensive line-search procedures. Experimental results with both simulated and measured data confirm the accuracy and effectiveness of the proposed algorithm.

  6. SIMULATED ANNEALING BASED POLYNOMIAL TIME QOS ROUTING ALGORITHM FOR MANETS

    Institute of Scientific and Technical Information of China (English)

    Liu Lianggui; Feng Guangzeng

    2006-01-01

    Multi-constrained Quality-of-Service (QoS) routing is a big challenge for Mobile Ad hoc Networks (MANETs), where the topology may change constantly. In this paper a novel QoS Routing Algorithm based on Simulated Annealing (SA_RA) is proposed. The algorithm first uses an energy function to translate multiple QoS weights into a single mixed metric, and then seeks a feasible path by simulated annealing. The paper outlines the simulated annealing algorithm and analyzes the problems encountered when applying it to QoS routing (QoSR) in MANETs. Theoretical analysis and experimental results demonstrate that the proposed method is an effective approximation algorithm, showing better performance than other pertinent algorithms in finding an (approximately) optimal configuration within polynomial time.
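The core loop (an energy function over mixed QoS metrics plus annealed acceptance) can be sketched over a list of candidate paths; the metrics and random-jump move rule here are illustrative, not SA_RA's actual neighborhood structure:

```python
import math
import random

def sa_route(paths, weights, temp=1.0, cooling=0.95, steps=200, seed=0):
    """Pick the path minimizing a weighted sum of its QoS metrics
    (the 'energy'), accepting uphill moves with probability exp(-delta/T)."""
    rng = random.Random(seed)

    def energy(i):
        return sum(w * m for w, m in zip(weights, paths[i]))

    current = rng.randrange(len(paths))
    best = current
    for _ in range(steps):
        cand = rng.randrange(len(paths))       # random candidate move
        delta = energy(cand) - energy(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
        if energy(current) < energy(best):
            best = current
        temp *= cooling                        # cooling schedule
    return best

# Three candidate paths with (delay, loss) metrics; weights mix them into one.
paths = [(5, 1), (1, 1), (3, 3)]
best_path = sa_route(paths, weights=(1.0, 1.0))
```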

  7. Distortion Parameters Analysis Method Based on Improved Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    ZHANG Shutuan

    2013-10-01

    Full Text Available In order to test the distortion parameters of an aircraft power supply system accurately and satisfy the requirements of the corresponding onboard equipment, a novel power parameter test system based on an improved filtering algorithm is introduced in this paper. The hardware of the test system features portable, high-speed data acquisition and processing; the software is developed in LabWindows/CVI and adopts a pre-processing technique together with the improved filtering algorithm. Compared with the traditional filtering algorithm, the improved algorithm increases test accuracy. Application shows that the test system achieves accurate results and meets the design requirements.

  8. Earth Observation Satellites Scheduling Based on Decomposition Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Feng Yao

    2010-11-01

    Full Text Available A decomposition-based optimization algorithm is proposed for solving the Earth observation satellite scheduling problem. The problem is decomposed into a task assignment main problem and single-satellite scheduling sub-problems: in the task assignment phase the tasks are allocated to satellites, and each satellite then schedules its own tasks in the single-satellite scheduling phase. An adaptive ant colony optimization algorithm searches for the optimal task assignment scheme, with an adaptive parameter adjusting strategy and a pheromone trail smoothing strategy introduced to balance exploration and exploitation. A heuristic algorithm and a very fast simulated annealing algorithm are proposed to solve the single-satellite scheduling problem. Each task assignment scheme is evaluated by integrating the observation scheduling results of the individual satellites, and the evaluation is fed back to the ant colony optimization to guide its search. Computational results show that the approach is effective for the satellite observation scheduling problem.

  9. Mobile robot dynamic path planning based on improved genetic algorithm

    Science.gov (United States)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic unknown environment, dynamic path planning for mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed: a reward value model estimates the probability of dynamic obstacles on a path, and this reward value is incorporated into the genetic algorithm. A dedicated coding technique reduces the computational complexity of the algorithm, and the fitness function fully considers three factors: path safety, path length and path reward value. Simulation results show that the proposed genetic algorithm is efficient in a variety of complex dynamic environments.

  10. Multiparty Quantum Key Agreement Based on Quantum Search Algorithm.

    Science.gov (United States)

    Cao, Hao; Ma, Wenping

    2017-03-23

    Quantum key agreement is an important topic in which the shared key must be negotiated equally by all participants, and no nontrivial subset of participants can fully determine the shared key. To date, the subkey embedding modes of all previously proposed quantum key agreement protocols have been based on either BB84 or entangled states; quantum key agreement based on quantum search algorithms has remained unexplored. In this paper, after investigating the properties of quantum search algorithms, we propose the first quantum key agreement protocol whose subkey embedding mode is based on a quantum search algorithm, namely Grover's algorithm. A novel five-party example of the protocol is presented. Efficiency analysis shows that our protocol outperforms existing MQKA protocols, and it is secure against both external and internal attacks.
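Grover's algorithm itself, the search primitive the protocol embeds its subkeys in, can be simulated classically for small registers; a minimal state-vector sketch:

```python
import numpy as np

def grover(n_qubits, marked, n_iter):
    """State-vector simulation of Grover iterations over 2**n_qubits states."""
    n = 2 ** n_qubits
    state = np.full(n, 1 / np.sqrt(n))      # uniform superposition
    for _ in range(n_iter):
        state[marked] *= -1                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state    # diffusion: inversion about the mean
    return state

# With 2 qubits, a single iteration finds the marked item with certainty.
probs = grover(2, marked=3, n_iter=1) ** 2
```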

  11. A New Generalized Similarity-Based Topic Distillation Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hongfang; DANG Xiaohui

    2007-01-01

    The procedure of hypertext induced topic search is analyzed under a semantic relation model, and the topic drift of the HITS algorithm is shown to arise because Web pages are projected onto a wrong latent semantic basis. A new concept, generalized similarity, is introduced and, based on it, a new topic distillation algorithm, GSTDA (generalized similarity based topic distillation algorithm), is presented to improve the quality of topic distillation. GSTDA not only avoids topic drift but also explores topics related to the user query. Experimental results on 10 queries show that GSTDA reduces the topic drift rate by 10% to 58% compared with the HITS (hypertext induced topic search) algorithm, and discovers several related topics for queries that have multiple meanings.
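The underlying HITS iteration that such topic distillation builds on can be sketched by power iteration; a generic version (not GSTDA itself):

```python
import numpy as np

def hits(adj, n_iter=50):
    """Hub and authority scores by power iteration;
    adj[i, j] = 1 means page i links to page j."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(n_iter):
        auths = adj.T @ hubs    # good authorities are linked by good hubs
        hubs = adj @ auths      # good hubs link to good authorities
        auths /= np.linalg.norm(auths)
        hubs /= np.linalg.norm(hubs)
    return hubs, auths
```

In a tiny graph where pages 0 and 1 both link to page 2, page 2 becomes the sole authority and pages 0 and 1 become equal hubs.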

  12. Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology

    Directory of Open Access Journals (Sweden)

    Guanghua Zhang

    2016-09-01

    Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the Gabor-function-based fingerprint image enhancement algorithm. The Contourlet transform has good texture sensitivity and can be used for segmenting the fingerprint image. The method proposed in this paper obtains the final fingerprint segmentation by applying a modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge lines through modulus maxima detection, and finally connecting broken ridge lines using a directional value filter. It captures richer directional information than methods based on the wavelet transform and the Gabor function and locates minutiae more accurately, although its ridges could be more coherent. Experiments show that this algorithm is clearly superior in detecting fingerprint features.

  13. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  14. The Algorithm for Rule-base Refinement on Fuzzy Set

    Institute of Scientific and Technical Information of China (English)

    LI Feng; WU Cui-hong; DING Xiang-wu

    2006-01-01

In the course of running an artificial intelligent system, many redundant rules are often produced. Refining the knowledge base, i.e. removing the redundant rules, can accelerate reasoning and shrink the rule base. The purpose of the paper is to present the thinking on this topic and to design an algorithm that removes redundant rules from the rule base. The "abstraction" of "state variable", redundant rules and the least rule base are discussed in the paper, and the algorithm for refining the knowledge base is presented.

  15. A multi-objective improved teaching-learning based optimization algorithm for unconstrained and constrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2014-01-01

Full Text Available The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning based optimization (TLBO) algorithm adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of number of teachers, adaptive teaching factor, tutorial training and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e. the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed by implementing it on unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The performance assessment is done by using the inverted generational distance (IGD) measure. The IGD measures obtained by using the MO-ITLBO algorithm are compared with those of the other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competitive algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of unconstrained test functions and the 3rd rank in the optimization of constrained test functions.
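The IGD measure used for the assessment is straightforward to compute: it is the average distance from each point of a reference Pareto front to the nearest obtained solution (lower is better). A minimal sketch:

```python
import math

def igd(reference_front, obtained_front):
    """Inverted generational distance: mean, over the reference front,
    of the distance to the nearest obtained solution."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained_front)
               for r in reference_front) / len(reference_front)
```

A perfect approximation of the reference front gives IGD = 0; missing regions of the front inflate the average.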

  16. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    Science.gov (United States)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

Bad pixels and response non-uniformity are the primary obstacles when IRFPAs are used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are fixed; the latter are caused by temperature drift and their positions change. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is only valid for fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In that algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally bad pixels are replaced by the filtered result. Experimental results on real infrared images obtained from a camera show the effectiveness of the proposed algorithm.

  17. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.

    Science.gov (United States)

    Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

    2008-04-01

    We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent earliest signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83+/-4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%).

  18. Knowledge Based Economy Assessment

    Directory of Open Access Journals (Sweden)

    Madalina Cristina Tocan

    2012-12-01

Full Text Available The importance of the knowledge-based economy (KBE) in the XXI century is evident. In the article the reflection of knowledge on the economy is analyzed. The main point is targeted to the analysis of the characteristics of knowledge expression in the economy and to the construction of the structure of KBE expression. This allows understanding the mechanism of the functioning of the knowledge economy. The authors highlight the possibility to assess the penetration level of KBE, which could manifest itself through the existence of products of knowledge expression created in their acquisition, creation, usage and development. The latter phenomenon is interpreted through knowledge expression characteristics: economic and social context, human resources, ICT, innovative business and innovation policy. The reason for this analysis was the idea that, in spite of the existence of the knowledge economy in all developed world countries, a definitive, universal list of indicators for mapping and measuring the KBE does not yet exist. Knowledge expression assessment models are presented in the article.

  19. Assessment Guidelines for Ant Colony Algorithms when Solving Quadratic Assignment Problems

    Science.gov (United States)

    See, Phen Chiak; Yew Wong, Kuan; Komarudin, Komarudin

    2009-08-01

To date, no consensus exists on how to evaluate the performance of a new Ant Colony Optimization (ACO) algorithm when solving Quadratic Assignment Problems (QAPs). Different performance measures and problem sets are used by researchers to evaluate their algorithms. This paper aims to provide a recapitulation of the relevant issues and to suggest some guidelines for assessing the performance of new ACO algorithms.

  20. Presentation of a general algorithm for effect-assessment on secondary poisoning. II Terrestrial food chains

    NARCIS (Netherlands)

    Romijn CAFM; Luttik R; Slooff W; Canton JH

    1991-01-01

In an earlier report, a simple algorithm for effect-assessment on secondary poisoning of birds and mammals was presented. This algorithm (MAR = NOEC/BCF) was drawn up by analyzing an aquatic food chain. In the present study it was tested whether this algorithm can be used equally well for effect-assessment on terrestrial food chains.

  1. Cooperation-based Ant Colony Algorithm in WSN

    Directory of Open Access Journals (Sweden)

    Jianbin Xue

    2013-04-01

Full Text Available This paper proposes a routing algorithm based on the ant colony algorithm. The traditional ant colony algorithm updates pheromone according to path length so as to find the shortest path from the initial node to the destination node. But a MIMO system is different from a SISO system: a longer path does not necessarily consume more energy, and a shorter path does not necessarily consume less. So the path needs to be selected according to its energy consumption. In this paper, the pheromone from the cluster head node to the next-hop node is updated based on energy consumption, so that a path with the least communication energy consumption can be found. This algorithm can save more of the network's energy. MATLAB simulation results show that the path chosen by the algorithm is better than that of the simple ant colony algorithm, and that the algorithm saves network energy and prolongs the life cycle of the network.
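The energy-based pheromone update can be illustrated with a minimal sketch. The colony size, evaporation rate and the deposit rule `q / route_energy` are illustrative assumptions, not the paper's exact formulas; the point is that pheromone accumulates on low-energy routes rather than short ones.

```python
import random

def path_energy(path, energy):
    # total transmission energy of consecutive hops (per-link cost table)
    return sum(energy[(a, b)] for a, b in zip(path, path[1:]))

def ant_colony_route(links, energy, src, dst, ants=50, rounds=30,
                     rho=0.1, q=1.0, seed=0):
    rng = random.Random(seed)
    tau = {e: 1.0 for e in energy}           # pheromone per directed link
    best, best_e = None, float("inf")
    for _ in range(rounds):
        routes = []
        for _ in range(ants):
            path, node, visited = [src], src, {src}
            while node != dst:
                choices = [n for n in links[node] if n not in visited]
                if not choices:
                    break
                weights = [tau[(node, n)] for n in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if node == dst:
                routes.append(path)
        for e in tau:                        # evaporation
            tau[e] *= (1 - rho)
        for path in routes:                  # energy-based deposit
            pe = path_energy(path, energy)
            if pe < best_e:
                best, best_e = path, pe
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += q / pe
    return best, best_e
```

On a toy topology where the one-hop route costs more energy than a two-hop route, the colony settles on the longer but cheaper path.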

  2. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel Flocking based approach for document clustering analysis. Our Flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks and fish schools. Unlike other partition clustering algorithms such as K-means, the Flocking based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the Flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.

  3. Phase shift extraction algorithm based on Euclidean matrix norm.

    Science.gov (United States)

    Deng, Jian; Wang, Hankun; Zhang, Desi; Zhong, Liyun; Fan, Jinping; Lu, Xiaoxu

    2013-05-01

    In this Letter, the character of Euclidean matrix norm (EMN) of the intensity difference between phase-shifting interferograms, which changes in sinusoidal form with the phase shifts, is presented. Based on this character, an EMN phase shift extraction algorithm is proposed. Both the simulation calculation and experimental research show that the phase shifts with high precision can be determined with the proposed EMN algorithm easily. Importantly, the proposed EMN algorithm will supply a powerful tool for the rapid calibration of the phase shifts.
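The sinusoidal character of the EMN can be illustrated numerically with a two-frame simplification (an assumption for illustration; the fringe model, image size and normalization below are not the paper's exact procedure). For interferograms I(d) = a + b*cos(phi + d), the Euclidean (Frobenius) norm of I(d) - I(0) is proportional to |sin(d/2)|, so normalizing by the maximum norm (reached at d = pi) recovers the phase shift as d = 2*arcsin(norm / norm_max).

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, (256, 256))   # random object phase
a, b = 120.0, 100.0                           # background and modulation
I0 = a + b * np.cos(phi)                      # reference interferogram

def emn(d):
    """Euclidean (Frobenius) norm of the intensity difference at shift d."""
    return np.linalg.norm(a + b * np.cos(phi + d) - I0)

norm_max = emn(np.pi)                         # the norm peaks at d = pi
d_true = 1.2
d_est = 2 * np.arcsin(min(emn(d_true) / norm_max, 1.0))
```

With 256 x 256 pixels the estimate lands within a fraction of a percent of the true shift, which is the "rapid calibration" property the abstract refers to.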

  4. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    Science.gov (United States)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated for various domains of the computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled a "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from the computational experiments on the QAP instances from the publicly available library QAPLIB show excellent performance of RGA. This is especially true for the real-life like QAPs.

  5. Manipulator Neural Network Control Based on Fuzzy Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

Three-layer feedforward neural networks are used to establish the inverse kinematics models of robot manipulators. A fuzzy genetic algorithm (FGA) based on the linear scaling of the fitness value is presented to update the weights of the neural networks. To increase the search speed of the algorithm, the crossover probability and the mutation probability are adjusted through fuzzy control, and the fitness is modified by the linear scaling method in the FGA. Simulations show that the proposed method considerably improves the precision of the inverse kinematics solutions for robot manipulators, guarantees rapid global convergence, and overcomes the drawbacks of the SGA and the BP algorithm.

  6. A novel image encryption algorithm based on DNA subsequence operation.

    Science.gov (United States)

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operation. Different from the traditional DNA encryption methods, our algorithm does not use complex biological operation but just uses the idea of DNA subsequence operations (such as elongation operation, truncation operation, deletion operation, etc.) combining with the logistic chaotic map to scramble the location and the value of pixel points from the image. The experimental results and security analysis show that the proposed algorithm is easy to be implemented, can get good encryption effect, has a wide secret key's space, strong sensitivity to secret key, and has the abilities of resisting exhaustive attack and statistic attack.
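The chaotic-scrambling idea can be sketched without the DNA subsequence operations themselves (which are omitted here): one logistic-map stream permutes pixel locations, a second masks pixel values, and both steps invert exactly given the key. The map parameters, burn-in length and byte-mask construction are illustrative assumptions, not the paper's scheme.

```python
def logistic_stream(x0, r, n, burn=100):
    """Iterate the logistic map x <- r*x*(1-x), discarding a burn-in prefix."""
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:
            out.append(x)
    return out

def encrypt(pixels, key=(0.3567, 3.99)):
    n = len(pixels)
    s = logistic_stream(key[0], key[1], 2 * n)
    perm = sorted(range(n), key=lambda i: s[i])   # location scrambling
    mask = [int(v * 256) % 256 for v in s[n:]]    # value scrambling
    return [pixels[perm[i]] ^ mask[i] for i in range(n)]

def decrypt(cipher, key=(0.3567, 3.99)):
    n = len(cipher)
    s = logistic_stream(key[0], key[1], 2 * n)
    perm = sorted(range(n), key=lambda i: s[i])
    mask = [int(v * 256) % 256 for v in s[n:]]
    plain = [0] * n
    for i in range(n):
        plain[perm[i]] = cipher[i] ^ mask[i]
    return plain
```

The key sensitivity claimed in the abstract comes from the map's sensitive dependence on the initial condition x0: a tiny change in the key yields an entirely different permutation and mask.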

  7. A motion retargeting algorithm based on model simplification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new motion retargeting algorithm is presented, which adapts the motion capture data to a new character. To make the resulting motion realistic, the physically-based optimization method is adopted. However, the optimization process is difficult to converge to the optimal value because of high complexity of the physical human model. In order to address this problem, an appropriate simplified model automatically determined by a motion analysis technique is utilized, and then motion retargeting with this simplified model as an intermediate agent is implemented. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.

  8. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with the Chen's hyper-chaotic system to control the control-NOT operation, which is used to encode gray-level information. The initial conditions of the Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has larger key space, higher key sensitivity, stronger resistance of statistical analysis and lower computational complexity than its classical counterparts.

  9. A Secure Watermarking Algorithm Based on Coupled Map Lattice

    Institute of Scientific and Technical Information of China (English)

    YI Xiang; WANG Wei-ran

    2005-01-01

Based on nonlinear theory, a secure watermarking algorithm using the wavelet transform and a coupled map lattice is presented. Chaos is sensitive to initial conditions and has good decorrelation properties, but the finite precision effect limits its application in practical digital watermarking systems. To overcome the undesirably short period of the chaotic mapping and improve the security level of the watermarking, the hyper-chaotic sequence is adopted in this algorithm. The watermark is mixed with the hyper-chaotic sequence and embedded in the wavelet domain of the host image. Experimental results and analysis are given to demonstrate that the proposed watermarking algorithm is transparent, robust and secure.

  10. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Science.gov (United States)

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operation. Different from the traditional DNA encryption methods, our algorithm does not use complex biological operation but just uses the idea of DNA subsequence operations (such as elongation operation, truncation operation, deletion operation, etc.) combining with the logistic chaotic map to scramble the location and the value of pixel points from the image. The experimental results and security analysis show that the proposed algorithm is easy to be implemented, can get good encryption effect, has a wide secret key's space, strong sensitivity to secret key, and has the abilities of resisting exhaustive attack and statistic attack. PMID:23093912

  11. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

Full Text Available We present a novel image encryption algorithm based on DNA subsequence operation. Different from the traditional DNA encryption methods, our algorithm does not use complex biological operation but just uses the idea of DNA subsequence operations (such as elongation operation, truncation operation, deletion operation, etc.) combining with the logistic chaotic map to scramble the location and the value of pixel points from the image. The experimental results and security analysis show that the proposed algorithm is easy to be implemented, can get good encryption effect, has a wide secret key's space, strong sensitivity to secret key, and has the abilities of resisting exhaustive attack and statistic attack.

  12. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Ahmed T. Sadiq Al-Obaidi

    2014-03-01

Full Text Available This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), to reduce the effect of the problems of TS. The proposed algorithm provides more diversity to the candidate solutions of TS. Two case studies have been solved using the proposed algorithm: the 4-Color Map problem and the Traveling Salesman Problem. The proposed algorithm gives good results compared with the original: the number of iterations is smaller, and fewer local-minimum or non-optimal solutions are produced.

  13. Heuristic-based scheduling algorithm for high level synthesis

    Science.gov (United States)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The schedule time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the presented benchmark example (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
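A generic list-scheduling step of the kind combined here can be sketched as follows. The mobility-based priority (ALAP minus ASAP slack, lower first) and a single type of functional unit are simplifying assumptions, not the paper's full heuristic.

```python
def list_schedule(deps, priority, units):
    """deps: op -> set of predecessor ops; priority: op -> mobility
    (lower schedules first); units: identical functional units per cycle."""
    done, schedule, cycle = set(), {}, 0
    while len(done) < len(deps):
        # operations whose predecessors have all completed, best priority first
        ready = sorted((op for op in deps
                        if op not in done and deps[op] <= done),
                       key=lambda op: priority[op])
        for op in ready[:units]:
            schedule[op] = cycle
        # done is updated after the cycle so same-cycle ops cannot chain
        done |= set(ready[:units])
        cycle += 1
    return schedule
```

On a diamond dependence graph a -> {b, c} -> d, one unit yields a four-cycle schedule, while two units finish d in cycle 2, which is the resource/latency trade-off the heuristic estimates.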

  14. A Learning Algorithm based on High School Teaching Wisdom

    CERN Document Server

    Philip, Ninan Sajeeth

    2010-01-01

A learning algorithm based on primary school teaching and learning is presented. The methodology is to continuously evaluate a student and to train them on the examples for which they repeatedly fail, until they can correctly answer all types of questions. This incremental learning procedure produces better learning curves by demanding that the student optimally dedicate their learning time to the failed examples. When used in machine learning, the algorithm is found to train a machine on the data with maximum variance in the feature space so that the generalization ability of the network improves. The algorithm has interesting applications in data mining, model evaluation and rare object discovery.

  15. Rate control algorithm based on frame complexity estimation for MVC

    Science.gov (United States)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit-rate among views based on correlation analysis. The proposed algorithm consists of four levels to control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to the coding parameters.
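The quadratic R-D model underlying such rate controllers relates the target bits R to the quantization step Q via R = X1*S/Q + X2*S/Q^2, where S is a frame-complexity measure (e.g. MAD) and X1, X2 are fitted model parameters. The names and this generic form are assumptions for illustration, not the paper's exact notation; solving the quadratic for Q gives the encoder's step size:

```python
import math

def quantizer_from_target(R, S, X1, X2):
    """Solve R = X1*S/Q + X2*S/Q**2 for Q, i.e. the positive root of
    R*Q**2 - X1*S*Q - X2*S = 0."""
    disc = (X1 * S) ** 2 + 4 * R * X2 * S
    return (X1 * S + math.sqrt(disc)) / (2 * R)
```

A larger bit budget R yields a smaller Q (finer quantization), which is the direction a frame-layer allocator exploits when a frame is complex or temporally active.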

  16. FAST UPDATE ALGORITHM FOR TCAM-BASED ROUTING LOOKUPS

    Institute of Scientific and Technical Information of China (English)

    王志恒; 叶强; 白英彩

    2002-01-01

Routing technology has been forced to evolve towards higher capacity and per-port packet processing speed. The ability to achieve high forwarding speed is due to either software or hardware technology. TCAM (Ternary Content Addressable Memory) provides a performance advantage over other software or hardware search algorithms, often resulting in an order-of-magnitude reduction of search time. But slow updates may affect the performance of TCAM-based routing lookup, so the key is to design a table management algorithm which supports high-speed updates in TCAMs. This paper presents three table management algorithms, compares their performance, and identifies the optimal one.

  17. Performance evaluation of a texture-based segmentation algorithm

    Science.gov (United States)

    Sadjadi, Firooz A.

    1991-07-01

Texture segmentations are crucial components of many remote sensing, scene analysis, and object recognition systems. However, very little attention has been paid to the problem of performance evaluation in the numerous algorithms that have been proposed by the image understanding community. In this paper, a particular algorithm is introduced and its performance is evaluated in a systematic manner on a wide range of scenes and scenarios. Both the algorithm and the methodology used in its evaluation have significance in numerous applications in the computer-based image understanding field.

  18. Automatic Image Registration Algorithm Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    LIU Qiong; NI Guo-qiang

    2006-01-01

    An automatic image registration approach based on wavelet transform is proposed. This proposed method utilizes multiscale wavelet transform to extract feature points. A coarse-to-fine feature matching method is utilized in the feature matching phase. A two-way matching method based on cross-correlation to get candidate point pairs and a fine matching based on support strength combine to form the matching algorithm. At last, based on an affine transformation model, the parameters are iteratively refined by using the least-squares estimation approach. Experimental results have verified that the proposed algorithm can realize automatic registration of various kinds of images rapidly and effectively.

  19. Clonal Strategy Algorithm Based on the Immune Memory

    Institute of Scientific and Technical Information of China (English)

    Ruo-Chen Liu; Li-Cheng Jiao; Hai-Feng Du

    2005-01-01

Based on the clonal selection theory and the immune memory mechanism in the natural immune system, a novel artificial immune system algorithm, Clonal Strategy Algorithm based on the Immune Memory (CSAIM), is proposed in this paper. The algorithm realizes the evolution of the antibody population and the evolution of the memory unit at the same time, and by using the clonal selection operator, global optimal computation can be combined with local searching. According to antibody-antibody (Ab-Ab) affinity and antibody-antigen (Ab-Ag) affinity, the algorithm can adaptively allot the scales of the memory unit and the antibody population. It is proved theoretically that CSAIM is convergent with probability 1. And with computer simulations of eight benchmark functions and one instance of the traveling salesman problem (TSP), it is shown that CSAIM has a high convergence speed, enhances the diversity of the population and avoids premature convergence to some extent.

  20. A novel iris segmentation algorithm based on small eigenvalue analysis

    Science.gov (United States)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on the statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2) and we compare our proposed method with existing iris segmentation methods. Our proposed method has the least time complexity, O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
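The geometric cue behind the second step can be illustrated by computing the small eigenvalue of the 2x2 covariance matrix of a set of edge pixels: it is near zero for collinear points and grows for curved or scattered boundaries. This is a minimal sketch of the statistic only, not the full localization procedure:

```python
import math

def small_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance matrix of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]]: tr/2 +/- sqrt(tr^2/4 - det)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    return tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))
```

Edge pixels lying on a straight segment give a value near zero, while a circular arc such as an iris boundary gives a distinctly positive value.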

  1. Image Encryption Algorithm Based on Chaotic Economic Model

    Directory of Open Access Journals (Sweden)

    S. S. Askar

    2015-01-01

    Full Text Available In literature, chaotic economic systems have got much attention because of their complex dynamic behaviors such as bifurcation and chaos. Recently, a few researches on the usage of these systems in cryptographic algorithms have been conducted. In this paper, a new image encryption algorithm based on a chaotic economic map is proposed. An implementation of the proposed algorithm on a plain image based on the chaotic map is performed. The obtained results show that the proposed algorithm can successfully encrypt and decrypt the images with the same security keys. The security analysis is encouraging and shows that the encrypted images have good information entropy and very low correlation coefficients and the distribution of the gray values of the encrypted image has random-like behavior.

  2. Cryptanalysis of an image encryption algorithm based on DNA encoding

    Science.gov (United States)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2017-10-01

Recently an image encryption algorithm based on DNA encoding and Elliptic Curve Cryptography (ECC) was proposed. This paper aims to investigate the security of this DNA-based image encryption algorithm and its resistance against chosen-plaintext attack. The results of the analysis demonstrate that the security of the algorithm mainly relies on one static shuffling step with a simple confusion operation. In this study, a practical plain-image recovery method is proposed, and it is shown that images encrypted with the same key can easily be recovered using the suggested cryptanalysis method with as few as two chosen plain images. A strategy to improve the security of the algorithm is also presented.

  3. A Modularity Degree Based Heuristic Community Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Dongming Chen

    2014-01-01

Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. The discovery of community structures is a basic research problem and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function. These optimization-function-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure, which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy compared with previous modularity optimization methods, even though it has less computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
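Modularity-based methods score a partition with Newman's Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j). A direct O(n^2) sketch of this quality function, the quantity that such algorithms optimize, is:

```python
def modularity(adj, community):
    """Newman modularity of a partition; adj maps node -> set of neighbours,
    community maps node -> community label."""
    m2 = sum(len(nbrs) for nbrs in adj.values())   # 2m (each edge counted twice)
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] == community[j]:
                a = 1.0 if j in adj[i] else 0.0
                q += a - len(adj[i]) * len(adj[j]) / m2
    return q / m2
```

Two disjoint triangles labelled as two communities score Q = 0.5, the textbook value for that configuration.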

  4. Efficient mining of association rules based on gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    Fariba Khademolghorani

    2011-07-01

Full Text Available Association rule mining is one of the most widely used tools to discover relationships among attributes in a database, and a lot of algorithms have been introduced for discovering such rules. These algorithms have to mine association rules in two separate stages, and most of them mine occurrence rules which are easily predictable by the users. Therefore, this paper discusses the application of the gravitational search algorithm for discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the proposed method is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, in comparison with the method of mining association rules based upon particle swarm optimization, show that our method is successful.
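The gravitational mechanics the search algorithm borrows can be sketched on a plain function-minimization task: better agents get larger mass and attract the others, and the gravitational constant G decays so the swarm shifts from exploration to exploitation. The mass normalization, decay schedule and all constants below are common GSA conventions assumed for illustration, not this paper's rule-mining formulation.

```python
import random, math

def gsa(f, dim, bounds, agents=30, iters=300, g0=100.0, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    v = [[0.0] * dim for _ in range(agents)]
    best_v, best_x = float("inf"), None
    for t in range(iters):
        fit = [f(xi) for xi in x]
        if min(fit) < best_v:
            best_v = min(fit)
            best_x = list(x[fit.index(best_v)])
        worst_f, best_f = max(fit), min(fit)
        span = (worst_f - best_f) or 1e-12
        m = [(worst_f - fi) / span for fi in fit]       # mass from fitness
        msum = sum(m) or 1e-12
        m = [mi / msum for mi in m]
        g = g0 * math.exp(-20 * t / iters)              # decaying gravity
        accs = []
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                r = math.dist(x[i], x[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * m[j] * (x[j][d] - x[i][d]) / r
            accs.append(acc)
        for i in range(agents):
            for d in range(dim):
                v[i][d] = rng.random() * v[i][d] + accs[i][d]
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
    return best_x, best_v
```

Minimizing the 2-D sphere function over [-5, 5]^2 with this sketch drives the best value close to zero within a few hundred iterations.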

  5. An Algorithm for Making Value-Based Strategic Decisions

    Directory of Open Access Journals (Sweden)

    Victor P. Palamarchuk

    2014-01-01

Full Text Available The latest tendency has been featuring an increase in the number of Russian companies that follow the principles of Value-Based Management (VBM), which is essentially a synergetic combination of corporate finance and strategic management. Strategic decisions are the principal driving force of a company's value growth. Thus, the situation calls for an understanding and adequate evaluation of the correlation between changes in a company's value and strategic decisions. A key to such understanding lies in the accurate definition and differentiation of such notions as "a company's fair value" and "a company's investment value". The paper contains an analysis of these fundamental definitions for the appraisal of a business, which further serves as a basis for making strategic value-based decisions. The suggested algorithm to control a company's value substantiates the following: the logic and procedure for preparation and implementation of strategic decisions; the differentiation and interrelation between strategic and operational decisions in a company's value-based management; the expediency and conditions for use of two intrinsically different approaches to strategic decision-making (namely, the creative approach and the trade-off approach); and approaches to the financial assessment and modeling of strategic decisions.

  6. Algorithmic Algebraic Combinatorics and Gröbner Bases

    CERN Document Server

    Klin, Mikhail; Jurisic, Aleksandar

    2009-01-01

This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries, with a special emphasis on algorithmic aspects and the use of the theory of Gröbner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGMA.

  7. QRS Detection Based on an Advanced Multilevel Algorithm

    OpenAIRE

    Wissam Jenkal; Rachid Latif; Ahmed Toumanari; Azzedine Dliou; Oussama El B’charri; Fadel Mrabih Rabou Maoulainine

    2016-01-01

    This paper presents an advanced multilevel algorithm for QRS complex detection. The method is based on three levels. The first extracts the higher peaks using an adaptive thresholding technique. The second detects the QRS region. The last level detects the Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system ...

  8. Free Search Algorithm Based Estimation in WSN Location

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hui; LI Dan-mei; SHAO Shi-huang; XU Chen

    2009-01-01

    This paper proposes a novel intelligent estimation algorithm for Wireless Sensor Network node localization based on Free Search, which converts parameter estimation into on-line optimization of a nonlinear function and estimates the coordinates of sensor nodes using Free Search optimization. Compared to least-squares estimation algorithms, the localization accuracy is increased significantly, as verified by the simulation results.
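The abstract treats localization as on-line minimization of a nonlinear range-error function. The sketch below follows that idea with a deliberately simplified, population-free stand-in for Free Search (accept-if-better sampling with a shrinking exploration radius); the anchor layout, search bounds, and schedule are illustrative assumptions, not the paper's setup.

```python
import math
import random

def range_error(pos, anchors, dists):
    # Sum of squared differences between candidate-to-anchor
    # distances and the measured ranges.
    return sum((math.dist(pos, a) - d) ** 2 for a, d in zip(anchors, dists))

def locate(anchors, dists, iters=4000, seed=0):
    # Simplified stand-in for Free Search: sample around the incumbent,
    # keep only improvements, and gradually narrow the search radius.
    rng = random.Random(seed)
    best = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
    best_err = range_error(best, anchors, dists)
    step = 5.0
    for _ in range(iters):
        cand = (best[0] + rng.uniform(-step, step),
                best[1] + rng.uniform(-step, step))
        err = range_error(cand, anchors, dists)
        if err < best_err:
            best, best_err = cand, err
        step = max(0.02, step * 0.998)  # shrink the exploration radius
    return best

# three non-collinear anchors give a unique 2-D position
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_node = (3.0, 4.0)
dists = [math.dist(true_node, a) for a in anchors]  # noise-free ranges
est = locate(anchors, dists)
```

With noise-free ranges the estimate converges close to the true node; real deployments would add range noise and, as in the paper, a full population-based optimizer.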

  9. T-Algorithm-Based Logic Simulation on Distributed Systems

    OpenAIRE

    Sundaram, S; Patnaik, LM

    1992-01-01

    Increase in the complexity of VLSI digital circuit design demands faster logic simulation techniques than those currently available. One way of speeding up existing logic simulation algorithms is to exploit the inherent parallelism in the sequential version. In this paper, we explore the possibility of mapping a T-algorithm-based logic simulation algorithm onto a cluster of workstations interconnected by an Ethernet. The set of gates at a particular level is partitioned by the ...

  10. PEA: Polymorphic Encryption Algorithm based on quantum computation

    OpenAIRE

    Komninos, N.; Mantas, G.

    2011-01-01

    In this paper, a polymorphic encryption algorithm (PEA), based on basic quantum computations, is proposed for the encryption of binary bits. PEA is a symmetric key encryption algorithm that applies different combinations of quantum gates to encrypt binary bits. PEA is also polymorphic since the states of the shared secret key control the different combinations of the ciphertext. It is shown that PEA achieves perfect secrecy and is resilient to eavesdropping and Trojan horse attacks. A securit...

  11. A genetic-algorithm approach for assessing the liquefaction potential of sandy soils

    Directory of Open Access Journals (Sweden)

    G. Sen

    2010-04-01

    Full Text Available The determination of liquefaction potential requires taking into account a large number of parameters, which gives the liquefaction phenomenon a complex nonlinear structure. Conventional methods rely on simple statistical and empirical relations or charts, but they cannot characterise these complexities. Genetic algorithms are suited to solving these types of problems. A genetic algorithm-based model has been developed to determine liquefaction potential, using Cone Penetration Test datasets derived from case studies of sandy soils. Software has been developed that uses genetic algorithms for parameter selection and assessment of liquefaction potential. Several estimation functions for the assessment of a Liquefaction Index were then generated from the dataset and evaluated against the training and test data. The suggested formulation estimates liquefaction occurrence with significant accuracy. Moreover, a parametric study on the liquefaction index curves shows good agreement with the physical behaviour. The total number of misestimated cases was only 7.8% for the proposed method, which is quite low compared to another commonly used method.
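The workflow described above — evolving a liquefaction-index function and scoring it against labeled cases — can be sketched with a tiny genetic algorithm. The CPT-style dataset, the linear index form, and all thresholds below are hypothetical toy choices for illustration, not the paper's data or estimation functions.

```python
import random

def fitness(w, data):
    # Fraction of cases whose liquefied / non-liquefied label matches
    # the sign of the linear index w0*qc + w1*csr + w2.
    hits = sum((w[0] * qc + w[1] * csr + w[2] > 0) == liq
               for qc, csr, liq in data)
    return hits / len(data)

def evolve(data, pop_size=30, gens=60, seed=2):
    # Plain GA: tournament selection, averaging crossover, Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=lambda w: fitness(w, data))
            b = max(rng.sample(pop, 3), key=lambda w: fitness(w, data))
            child = [(x + y) / 2 for x, y in zip(a, b)]
            if rng.random() < 0.3:
                child[rng.randrange(3)] += rng.gauss(0, 0.5)
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda w: fitness(w, data))

# Synthetic CPT-style cases (toy rule, not field data): liquefaction when
# the cyclic stress ratio is high relative to cone resistance, keeping a
# clear margin around the decision boundary.
rng = random.Random(1)
data = []
while len(data) < 40:
    qc, csr = rng.uniform(1, 20), rng.uniform(0.05, 0.6)
    margin = csr - 0.02 * qc - 0.15
    if abs(margin) > 0.05:
        data.append((qc, csr, margin > 0))

best = evolve(data)
```

The evolved weights play the role of the paper's estimation function; a real study would use measured CPT features and separate training/test splits.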

  12. Sampling-based Algorithms for Optimal Motion Planning

    CERN Document Server

    Karaman, Sertac

    2011-01-01

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically opti...

  13. Face detection based on multiple kernel learning algorithm

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition, etc. The basic idea is to determine whether there is a face in an image and, if so, its location and size. This can be seen as a binary classification problem, which can be well solved by the support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which will be analyzed in depth in the paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. In the paper, we describe the proposed algorithm from the interdisciplinary research perspective of machine learning and image processing. After analyzing the limitations of describing a face with a single feature, we apply several features. To fuse them well, we try different kernel functions on different features. By the MKL method, the weight of each single kernel function is determined. Thus, we obtain the face detection model, which is the kernel of the proposed method. Experiments on a public data set and real-life face images are performed. We compare the performance of the proposed algorithm with the single kernel-single feature based algorithm and the multiple kernels-single feature based algorithm. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  14. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
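The list-based cooling schedule described above can be sketched directly: seed a temperature list from the costs of random moves, accept worse solutions against the list's maximum temperature, and replace that maximum with the temperature that would have accepted the move with the drawn probability. The 2-opt neighborhood, list length, and iteration budget below are illustrative assumptions, not the paper's exact configuration.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, i, j):
    # Reverse the segment tour[i..j].
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def lbsa_tsp(pts, list_len=20, iters=5000, seed=3):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, pts)
    # Seed the temperature list from the costs of random 2-opt moves.
    temps = []
    while len(temps) < list_len:
        i, j = sorted(rng.sample(range(n), 2))
        temps.append(abs(tour_length(two_opt(tour, i, j), pts) - cur) + 1e-9)
    best, best_len = tour[:], cur
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = two_opt(tour, i, j)
        delta = tour_length(cand, pts) - cur
        if delta <= 0:
            tour, cur = cand, cur + delta
        else:
            t_max = max(temps)  # hottest temperature in the list
            r = max(rng.random(), 1e-12)
            if r < math.exp(-delta / t_max):
                # Replace the maximum with the temperature that would have
                # accepted this move with probability exactly r; since
                # r < exp(-delta/t_max), the new value is below t_max,
                # so the list cools adaptively.
                temps.remove(t_max)
                temps.append(-delta / math.log(r))
                tour, cur = cand, cur + delta
        if cur < best_len:
            best, best_len = tour[:], cur
    return best, best_len

# Eight cities on a unit circle: the optimal tour follows the circle
# (length 16*sin(pi/8) ~ 6.12).
pts = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4))
       for k in range(8)]
best, best_len = lbsa_tsp(pts)
```

On this toy instance the best tour found should be at or near the circular ordering; the key point is that no explicit cooling rate is ever specified.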

  15. Majorization-minimization algorithms for wavelet-based image restoration.

    Science.gov (United States)

    Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D

    2007-12-01

    Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms, reveals their relative merits for different standard types of scenarios.
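One member of the MM family discussed above is the iterative shrinkage/thresholding scheme obtained from a quadratic majorizer of the data term. The sketch below solves a tiny synthetic instance of min_x 0.5*||y - Ax||^2 + lam*||x||_1 in pure Python; the matrix, data, and step-size bound are illustrative choices, not the paper's wavelet deconvolution setup.

```python
def soft(v, t):
    # Soft-threshold: the exact solution of the separable l1 subproblem.
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def ist(A, y, lam, steps=200):
    # MM / iterative shrinkage-thresholding for
    #   min_x 0.5*||y - A x||^2 + lam*||x||_1,
    # using a quadratic majorizer with curvature L >= ||A||_2^2.
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)  # ||A||_F^2, a crude bound
    x = [0.0] * n
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        grad = [sum(A[i][j] * (Ax[i] - y[i]) for i in range(m))
                for j in range(n)]
        x = soft([x[j] - grad[j] / L for j in range(n)], lam / L)
    return x

# Toy problem: A = identity, so the exact minimizer is soft(y, lam).
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [2.0, 0.2, -1.0]
x = ist(A, y, lam=0.5)
```

With A = I the iteration converges to the closed-form soft-threshold of y, which makes the majorization step easy to check by hand.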

  16. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    Directory of Open Access Journals (Sweden)

    Hiroki Yoshioka

    2011-07-01

    Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depended on the target spectra, endmember spectra, and choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and usefulness of the technique in terms of robustness against measurement noise.
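Under the two-endmember assumption, the pixel reflectance is modeled as rho = f*veg + (1 - f)*soil, and one standard retrieval projects the measured spectrum onto the line between the endmembers. The endmember spectra and noise below are invented for illustration; the paper compares several such retrieval variants rather than this single one.

```python
def unmix_fvc(rho, veg, soil):
    # Least-squares two-endmember unmixing:
    # rho ~ f*veg + (1-f)*soil  =>  f = <rho - soil, veg - soil> / ||veg - soil||^2
    d = [v - s for v, s in zip(veg, soil)]
    num = sum((r - s) * di for r, s, di in zip(rho, soil, d))
    den = sum(di * di for di in d)
    return num / den

veg = [0.05, 0.45]   # e.g. [red, NIR] reflectance of full vegetation (toy)
soil = [0.20, 0.25]  # bare-soil endmember (toy)
f_true = 0.6
rho = [f_true * v + (1 - f_true) * s for v, s in zip(veg, soil)]
f_hat = unmix_fvc(rho, veg, soil)

# Perturb the spectrum to see how measurement noise propagates into FVC.
rho_noisy = [r + n for r, n in zip(rho, [0.01, -0.01])]
f_noisy = unmix_fvc(rho_noisy, veg, soil)
```

The noise sensitivity of f_hat scales with 1/||veg - soil||^2, which is exactly the kind of dependence the paper's robustness factor quantifies.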

  17. A Novel Algorithm Based on 3D-MUSIC Algorithm for Localizing Near-Field Source

    Institute of Scientific and Technical Information of China (English)

    SHAN Zhi-yong; ZHOU Xi-lang; PEN Gen-jiang

    2005-01-01

    A novel 3-D MUSIC algorithm based on the classical 3D-MUSIC algorithm for locating near-field sources is presented. Under the far-field assumption of an actual near-field, two algebraic relations between the location parameters of the actual near-field sources and the far-field ones are derived. With Fourier transformation and polynomial-rooting methods, the elevation and azimuth of the far-field are obtained, the tracking paths can be developed, and the location parameters of the near-field source can be determined; more accurate results can then be estimated using an optimization method. Computer simulation results prove that the algorithm for locating near-field sources is more accurate, effective and suitable for real-time applications.

  18. A Color Image Edge Detection Algorithm Based on Color Difference

    Science.gov (United States)

    Zhuo, Li; Hu, Xiaochen; Jiang, Liying; Zhang, Jing

    2016-12-01

    Although image edge detection algorithms have been widely applied in image processing, the existing algorithms still face two important problems. On one hand, to restrain the interference of noise, smoothing filters are generally exploited in the existing algorithms, resulting in loss of significant edges. On the other hand, since the existing algorithms are sensitive to noise, many noisy edges are usually detected, which will disturb the subsequent processing. Therefore, a color image edge detection algorithm based on color difference is proposed in this paper. Firstly, a new operation called color separation is defined in this paper, which can reflect the information of color difference. Then, for the neighborhood of each pixel, color separations are calculated in four different directions to detect the edges. Experimental results on natural and synthetic images show that the proposed algorithm can remove a large number of noisy edges and be robust to the smoothing filters. Furthermore, the proposed edge detection algorithm is applied in road foreground segmentation and shadow removal, which achieves good performances.
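The "color separation" operation can be illustrated on a toy RGB image: for each pixel, compare it with its neighbor in each of four directions and take the largest per-channel difference; a pixel is an edge if any directional separation exceeds a threshold. This is our assumed reading of the operator, with invented image data and threshold, not the paper's exact definition.

```python
def color_separation(img, x, y, dx, dy):
    # Largest per-channel absolute difference between pixel (x, y) and
    # its neighbour in direction (dx, dy) (assumed interpretation).
    h, w = len(img), len(img[0])
    nx, ny = x + dx, y + dy
    if not (0 <= nx < h and 0 <= ny < w):
        return 0
    return max(abs(img[x][y][c] - img[nx][ny][c]) for c in range(3))

def detect_edges(img, thresh=40):
    # Evaluate the separation in four directions and threshold.
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    h, w = len(img), len(img[0])
    return [[1 if max(color_separation(img, x, y, dx, dy)
                      for dx, dy in dirs) > thresh else 0
             for y in range(w)]
            for x in range(h)]

# Two flat colour regions split at column 3; note the boundary is purely
# chromatic, the kind of edge a grayscale gradient can miss.
red, green = (200, 40, 40), (40, 200, 40)
img = [[red if y < 3 else green for y in range(6)] for x in range(4)]
edges = detect_edges(img)
```

On this image every pixel in column 2 fires (its right-hand neighbor differs by 160 per channel) while the flat interiors stay silent, so no smoothing filter is needed to suppress spurious responses.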

  19. A face recognition algorithm based on thermal and visible data

    Science.gov (United States)

    Sochenkov, Ilya; Tihonkih, Dmitrii; Vokhmintcev, Aleksandr; Melnikov, Andrey; Makovetskii, Artyom

    2016-09-01

    In this work we present an algorithm that fuses thermal infrared and visible imagery to identify persons. The proposed face recognition method contains several components, in particular rigid body image registration. The rigid registration is achieved by a modified variant of the iterative closest point (ICP) algorithm. We consider an affine transformation in three-dimensional space that preserves the angles between lines. The matching algorithm is inspired by recent results in the neurophysiology of vision. We also consider the ICP error-metric minimization stage for the case of an arbitrary affine transformation. Our face recognition algorithm also uses localized-contouring algorithms to segment the subject's face, and thermal matching based on partial least squares discriminant analysis. Thermal imagery face recognition methods are advantageous when there is no control over illumination or for detecting disguised faces. The proposed algorithm leads to good matching accuracies for different person recognition scenarios (near infrared, far infrared, thermal infrared, viewed sketch). The performance of the proposed face recognition algorithm in real indoor environments is presented and discussed.

  20. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  1. Genetic Algorithm based PID controller for Frequency Regulation Ancillary services

    Directory of Open Access Journals (Sweden)

    Sandeep Bhongade

    2010-12-01

    Full Text Available In this paper, the parameters of a Proportional, Integral and Derivative (PID) controller for Automatic Generation Control (AGC) suitable for a restructured power system are tuned according to Genetic Algorithm (GA) based performance indices. The key idea of the proposed method is to use a fitness function based on the Area Control Error (ACE). The functioning of the proposed Genetic Algorithm based PID (GAPID) controller has been demonstrated on a 75-bus Indian power system network, and the results have been compared with those obtained by using the Least Square Minimization method.

  2. Application of genetic algorithm to hexagon-based motion estimation.

    Science.gov (United States)

    Kung, Chih-Ming; Cheng, Wan-Shu; Jeng, Jyh-Horng

    2014-01-01

    With the improvement of science and technology, the development of networks, and the advent of HDTV, the demands on audio and video become more and more important. Video coding technology is the solution for achieving these requirements. Motion estimation, which removes the redundancy between video frames, plays an important role in video coding. Therefore, many experts devote themselves to these issues. The existing fast algorithms rely on the assumption that the matching error decreases monotonically as the search point moves closer to the global optimum. Genetic algorithms, however, are not fundamentally limited by this restriction. This property helps the proposed scheme approach the mean square error of full search more closely than those fast algorithms do. The aim of this paper is to propose a new technique which combines the hexagon-based search algorithm, which is faster than diamond search, with a genetic algorithm. Experiments are performed to demonstrate the encoding speed and accuracy of the hexagon-based search pattern method and the proposed method.
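The hexagon-based search component works by repeatedly moving a six-point large hexagon toward the lowest matching cost, then refining with a small inner pattern. The sketch below implements that pattern over an abstract cost function; in real motion estimation the cost would be the SAD between blocks of two frames, whereas here a convex surrogate with its minimum at motion vector (3, 2) is used so the descent is easy to follow (an illustrative assumption, not the paper's GA-augmented version).

```python
# Six points of the large hexagon and four of the small refinement pattern.
LARGE_HEX = [(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
SMALL = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def hexagon_search(cost, start=(0, 0), max_steps=32):
    # Move the large hexagon until its centre is the best point,
    # then refine once with the small pattern.
    center = start
    for _ in range(max_steps):
        cands = [(center[0] + dx, center[1] + dy) for dx, dy in LARGE_HEX]
        best = min(cands + [center], key=cost)
        if best == center:
            break
        center = best
    cands = [(center[0] + dx, center[1] + dy) for dx, dy in SMALL]
    return min(cands + [center], key=cost)

# Surrogate block-matching cost: minimal at displacement (3, 2).
def cost(p):
    return (p[0] - 3) ** 2 + (p[1] - 2) ** 2

mv = hexagon_search(cost)
```

Starting from (0, 0) the large hexagon steps to (1, 2) and then (3, 2), where the centre is already optimal; this descent is exactly where the monotonic-error assumption mentioned in the abstract matters, and where the GA hybrid is meant to help when it fails.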

  3. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Full Text Available Since present chaotic image encryption algorithms based on scrambling and diffusion are vulnerable to chosen-plaintext (chosen-ciphertext) attacks during pixel position scrambling, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaos sequence and the plaintext itself, achieving an organic fusion of the image features and the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves plaintext sensitivity and the algorithm's resistance to chosen-plaintext and chosen-ciphertext attacks, while also making full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm can not only effectively resist chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks but also effectively improve the efficiency of image encryption, making it a relatively secure and effective approach to image communication.

  4. Face Recognition Algorithms Based on Transformed Shape Features

    Directory of Open Access Journals (Sweden)

    Sambhunath Biswas

    2012-05-01

    Full Text Available Human face recognition is, indeed, a challenging task, especially under illumination and pose variations. We examine in the present paper the effectiveness of two simple algorithms using coiflet packet and Radon transforms to recognize human faces from some databases of still gray level images, under illumination and pose variations. Both algorithms convert 2-D gray level training face images into their respective depth maps, or physical shape, which are subsequently transformed by coiflet packet and Radon transforms to compute energy for feature extraction. Experiments show that such transformed shape features are robust to illumination and pose variations. With the features extracted, training classes are optimally separated through linear discriminant analysis (LDA), while classification for test face images is made through a k-NN classifier, based on the L1 norm and Mahalanobis distance measures. The proposed algorithms are then tested on face images that differ in illumination, expression or pose separately, obtained from three databases, namely, the ORL, Yale and Essex-Grimace databases. Results so obtained are compared with two different existing algorithms. Performance using Daubechies wavelets is also examined. It is seen that the proposed coiflet packet and Radon transform based algorithms have significant performance, especially under different illumination conditions and pose variation. The comparison shows the proposed algorithms are superior.

  5. Research on Quantum Searching Algorithms Based on Phase Shifts

    Institute of Scientific and Technical Information of China (English)

    ZHONG Pu-Cha; BAO Wan-Su

    2008-01-01

    One iteration in Grover's original quantum search algorithm consists of two Hadamard-Walsh transformations, a selective amplitude inversion and a diffusion amplitude inversion. We concentrate on the relation among the probability of success of the algorithm, the phase shifts, the number of target items and the number of iterations, by replacing the two amplitude inversions with phase shifts of arbitrary φ and ψ (0 ≤ φ, ψ ≤ 2π). Then, according to this relation, we find the optimal phase shifts when the number of iterations is given. We present a new quantum search algorithm based on the optimal phase shifts of 1.018 after 0.5π√(N/M) iterations. The new algorithm can obtain either a single target item or multiple target items in the search space with a probability of success of at least 93.43%.
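For the baseline case the abstract modifies (both phase shifts equal to π), the success probability after k iterations has the well-known closed form sin²((2k+1)θ) with θ = asin(√(M/N)), which peaks near (π/4)√(N/M) iterations. A quick numeric check, using an example instance of our own choosing:

```python
import math

def grover_success(N, M, k):
    # Success probability after k standard (phase = pi) Grover iterations:
    # sin^2((2k+1)*theta), where theta = asin(sqrt(M/N)).
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

N, M = 1024, 1
theta = math.asin(math.sqrt(M / N))
k_opt = round(math.pi / (4 * theta) - 0.5)  # makes (2k+1)*theta ~ pi/2
p_opt = grover_success(N, M, k_opt)
p_early = grover_success(N, M, 5)  # stopping early leaves p low
```

The paper's contribution is choosing non-π phase shifts so that a guaranteed lower bound (93.43%) holds after a prescribed number of iterations; the standard-phase curve above is the reference it improves on.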

  6. Electronic Commerce Logistics Network Optimization Based on Swarm Intelligent Algorithm

    Directory of Open Access Journals (Sweden)

    Yabing Jiao

    2013-09-01

    Full Text Available This article establishes an efficient electronic commerce logistics operation system to reduce distribution costs, building a logistics network operation model around the B2C electronic commerce enterprise logistics network operation system and the features of B2C electronic commerce transactions on the enterprise network platform. To solve the resulting NP-hard problem, the article uses a hybrid of the ant colony algorithm and the particle swarm algorithm, both swarm intelligence algorithms, to obtain a best solution. Based on the intelligent algorithm, an electronic commerce logistics network optimization system is designed and validated on a national network of 22 electronic commerce logistics nodes. The experiments verify that the optimized logistics cost decreases greatly. This research can help B2C electronic commerce enterprises optimize logistics network decision-making while ensuring consumer interests and service levels, and can be an effective way for enterprises to improve the efficiency of logistics services and reduce operating costs.

  7. PCNN document segmentation method based on bacterial foraging optimization algorithm

    Science.gov (United States)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly defining the relevant parameters is a difficult task in research on PCNN applications. So far, determining the parameters of the model has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and eliminates the trouble of manually setting the experimental parameters. Experimental results show that the proposed algorithm can effectively complete document segmentation, and the segmentation result is better than that of the contrast algorithms.

  8. A layer reduction based community detection algorithm on multiplex networks

    Science.gov (United States)

    Wang, Xiaodong; Liu, Jing

    2017-04-01

    Detecting hidden communities is important for the analysis of complex networks. However, many algorithms have been designed for single layer networks (SLNs), while just a few approaches have been designed for multiplex networks (MNs). In this paper, we propose an algorithm based on layer reduction for detecting communities on MNs, termed LRCD-MNs. First, we improve a layer reduction algorithm, termed neighaggre, to combine similar layers and keep others separated. Then, we use neighaggre to find the community structure hidden in MNs. Experiments on real-life networks show that neighaggre can obtain higher relative entropy than the other algorithm. Moreover, we apply LRCD-MNs to some real-life and synthetic multiplex networks, and the results demonstrate that, although LRCD-MNs does not have an advantage in terms of modularity, it can obtain higher values of surprise, which is used to evaluate the quality of partitions of a network.

  9. Knowledge Template Based Multi-perspective Car Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Cai

    2010-12-01

    Full Text Available In order to solve problems of the vehicle-oriented society such as traffic jams and traffic accidents, the intelligent transportation system (ITS) has been proposed and has become a research focus, with the purpose of giving people better and safer driving conditions and assistance. The core of an intelligent transportation system is vehicle recognition and detection, which is the prerequisite for other related problems. Many existing vehicle recognition algorithms target one specific viewing direction, mostly the front/back or side view. To make the algorithm more robust, this paper presents a vehicle recognition algorithm for obliquely viewed vehicles, while also covering front/back and side views. The algorithm is designed based on common knowledge about cars, such as shape, structure and so on. Experimental results on many car images show that our method has fine accuracy in car recognition.

  10. Meteosat Images Encryption based on AES and RSA Algorithms

    Directory of Open Access Journals (Sweden)

    Boukhatem Mohammed Belkaid

    2015-06-01

    Full Text Available Satellite image security plays a vital role in communication systems and the Internet. This work is concerned with securing the transmission of Meteosat images on the Internet, in public or local networks. To enhance the security of Meteosat transmission in network communication, a hybrid encryption algorithm based on the Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA) algorithms is proposed. The AES algorithm is used for data transmission because of its higher efficiency in block encryption, and the RSA algorithm is used for the encryption of the AES key because of its advantages in key management. Our encryption system generates a unique password for every new encryption session. Cryptanalysis and various experiments have been carried out, and the results are reported in this paper, demonstrating the feasibility and flexibility of the proposed scheme.
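The structure of such a hybrid scheme — bulk data under a fast symmetric cipher, the session key wrapped with RSA — can be sketched end to end. Python's standard library has no AES, so a SHA-256-based XOR keystream stands in for it, and the RSA uses textbook tiny primes; both are loudly insecure illustrations of the data flow only, not the paper's cipher.

```python
import hashlib

def stream_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in for AES: XOR with a SHA-256 counter keystream.
    # Illustration only -- NOT secure; a real system would use AES.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Textbook RSA with tiny primes (illustration only).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent

session_key = b"\x07\x2a"    # toy per-session key ("unique password")
image = b"meteosat-scanline-data"

# Sender: encrypt the image with the session key, wrap the key with RSA.
cipher_img = stream_cipher(session_key, image)
wrapped = [pow(b, e, n) for b in session_key]

# Receiver: unwrap the session key, then decrypt the image.
key2 = bytes(pow(c, d, n) for c in wrapped)
plain = stream_cipher(key2, cipher_img)
```

The round trip recovers the original bytes; in the proposed scheme the symmetric stage would be real AES and the RSA modulus of cryptographic size.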

  11. Target Image Matching Algorithm Based on Binocular CCD Ranging

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2014-01-01

    Full Text Available This paper proposes a subpixel-level target image matching algorithm for binocular CCD ranging, based on the principle of binocular CCD ranging. Firstly, we introduce the ranging principle of the binocular ranging system and deduce a binocular parallax formula. Secondly, we deduce an improved cross-correlation matching algorithm combined with a cubic surface fitting algorithm for matching target images, which achieves subpixel-level matching of binocular CCD ranging images. Lastly, through experiments we have analyzed and verified actual CCD ranging images, then analyzed the errors of the experimental results and corrected the formula for calculating system errors. Experimental results show that the actual measurement accuracy for a target within 3 km was higher than 0.52%, which meets the accuracy requirements of high-precision binocular ranging.
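The two quantitative pieces above — the parallax formula and subpixel refinement of the match position — can be sketched briefly. For the pinhole stereo model the parallax formula is Z = f·B/d; a parabola fit through three correlation scores is the 1-D analogue of the paper's cubic surface fitting, and all the numbers below are invented for illustration.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Pinhole stereo parallax formula: Z = f * B / d
    # (f in pixels, baseline in metres, disparity in pixels).
    return f_px * baseline_m / disparity_px

def subpixel_peak(s_left, s_mid, s_right):
    # Parabola through three correlation scores around the best
    # integer-pixel match; returns the fractional offset of the peak
    # (1-D analogue of cubic surface fitting over a 2-D score patch).
    denom = s_left - 2 * s_mid + s_right
    return 0.0 if denom == 0 else 0.5 * (s_left - s_right) / denom

# Hypothetical values: best integer disparity 10 px, scores 2, 5, 4
# at disparities 9, 10, 11; focal length 1200 px, baseline 0.5 m.
d = 10 + subpixel_peak(2.0, 5.0, 4.0)   # refined, subpixel disparity
Z = depth_from_disparity(1200.0, 0.5, d)
```

The refinement shifts the disparity by +0.25 px here; since Z varies as 1/d, exactly this kind of subpixel correction is what pushes the range error below the 0.52% reported above.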

  12. Validation of a Bayesian-based isotope identification algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, C.J.; Stinnett, J., E-mail: stinnettjacob@gmail.com

    2015-06-01

    Handheld radio-isotope identifiers (RIIDs) are widely used in Homeland Security and other nuclear safety applications. However, most commercially available devices have serious problems in their ability to correctly identify isotopes. It has been reported that this flaw is largely due to the overly simplistic identification algorithms on-board the RIIDs. This paper reports on the experimental validation of a new isotope identification algorithm using a Bayesian statistics approach to identify the source while allowing for calibration drift and unknown shielding. We present here results on further testing of this algorithm and a study on the observed variation in the gamma peak energies and areas from a wavelet-based peak identification algorithm.

  13. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  14. A Load Balance Routing Algorithm Based on Uneven Clustering

    Directory of Open Access Journals (Sweden)

    Liang Yuan

    2013-10-01

    Full Text Available Aiming at the problem of uneven load in clustering Wireless Sensor Networks (WSNs), a load balance routing algorithm based on uneven clustering is proposed, which performs uneven clustering and calculates the optimal number of clusters. Through even node clustering, the algorithm prevents the number of common nodes under a given cluster head from becoming so large that the overload drives the cluster head to exhaustion. It constructs an evaluation function that better reflects the residual energy distribution of nodes, and at the same time constructs a routing evaluation function between cluster heads; MATLAB is used to simulate the performance of the algorithm. Simulation results show that the routing established by this algorithm effectively improves the network's energy balance and prolongs the network lifetime.

  15. Dynamic Obfuscation Algorithm based on Demand-Driven Symbolic Execution

    Directory of Open Access Journals (Sweden)

    Yubo Yang

    2014-06-01

    Full Text Available Dynamic code obfuscation techniques increase the difficulty of dynamic reverse engineering through runtime confusion. Path explosion directly affects the efficiency and accuracy of dynamic symbolic analysis. Exploiting this defect, this paper presents a novel algorithm, DDD (Demand-Driven Dynamic Obfuscation Algorithm), based on the demand-driven theory of symbolic analysis. First, it creates a large number of invalid paths to mislead the results of symbolic analysis. Second, according to the demand-driven theory, it creates a specific execution path to protect the security of the software. The design and implementation of the algorithm are based on the currently popular and mature satisfiability modulo theories (SMT), and the experimental effects are tested with the Z3 SMT solver and the Pex symbolic execution test tool. The experimental results prove that the algorithm enhances the security of the program.

  16. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Directory of Open Access Journals (Sweden)

    Tolga Güyer

    2015-04-01

Full Text Available This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas, disorientation and string matching. String-matching algorithms provide a more convenient disorientation measurement than other techniques in that they examine the similarity between an optimal path and learners' navigation paths. In particular, the algorithm takes into account the contextual similarity between partly relevant web pages in a user's navigation path and pages in an optimal path. This study focuses on the rationale and the steps required to use this algorithm for disorientation measurement. Examples of actual student activities and learning-environment data are provided to illustrate the process.
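The core idea, aligning a learner's page sequence against an optimal sequence with Needleman-Wunsch, can be sketched as follows (the scoring values and the normalisation into a disorientation score are our assumptions, not the study's exact parameters):

```python
def needleman_wunsch(optimal, actual, match=1, mismatch=-1, gap=-1):
    """Global alignment score between an optimal page sequence and a
    learner's navigation path (classic Needleman-Wunsch dynamic program)."""
    m, n = len(optimal), len(actual)
    # score[i][j] = best alignment score of optimal[:i] against actual[:j]
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if optimal[i-1] == actual[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[m][n]

def disorientation(optimal, actual):
    """0 means the learner followed the optimal path exactly; larger means more lost."""
    best = len(optimal)  # score of a perfect match
    return (best - needleman_wunsch(optimal, actual)) / max(1, best)
```

A learner who detours through an irrelevant page receives a strictly worse alignment score, and hence a higher disorientation value, than one who stays on the optimal path.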

  17. Hybrid Collision Detection Algorithm based on Image Space

    Directory of Open Access Journals (Sweden)

    XueLi Shen

    2013-07-01

Full Text Available Collision detection is an important application in the field of virtual reality, and performing it efficiently has become a research focus. To address the poor real-time performance of collision detection, this paper presents a hybrid collision detection algorithm: potentially colliding object sets are detected quickly with a mixed bounding-volume hierarchy tree, and a streaming-pattern collision detection algorithm then performs an accurate test. With these methods, the load is balanced between the CPU and GPU and the detection rate is increased. The experimental results show that, compared with the classic RAPID algorithm, this algorithm can effectively improve the efficiency of collision detection.

  18. Medical Images Watermarking Algorithm Based on Improved DCT

    Directory of Open Access Journals (Sweden)

    Yv-fan SHANG

    2013-12-01

Full Text Available Targeting the persistent security problems of digital information management systems in modern medicine, this paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and DCT. The algorithm first uses scrambling technology to encrypt the watermark information, and then combines it with the visual feature vector of the image to generate a binary logic sequence through a hash function. The sequence is taken as a key and stored with a third party to establish ownership of the original image. Requiring no manual selection of a region of interest, no capacity constraint, and no participation of the original medical image, this kind of watermark extraction solves the security and speed problems of watermark embedding and extraction. The simulation results also show that the algorithm is simple in operation and excellent in robustness and invisibility. In a word, it is more practical than comparable algorithms.
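The Arnold transformation used for scrambling the watermark is a standard cat map on a square image; a minimal sketch (using the common map (x, y) -> (x + y, x + 2y) mod N, which may differ from the variant the paper adopts) is:

```python
def arnold(img):
    """One iteration of the Arnold cat map on a square binary/grey image:
    pixel at (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    """Exact inverse map: (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out
```

Iterating `arnold` several times scrambles the watermark; the legitimate owner reverses the same number of iterations with `arnold_inverse` during extraction.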

  19. A robust DCT domain watermarking algorithm based on chaos system

    Science.gov (United States)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images whose copyright is involved in transactions. There are many kinds of digital watermarking algorithms; however, existing ones are not robust enough against geometric attacks and signal-processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal-processing operations, and geometric transformations. Furthermore, anyone who has no idea of the key cannot find the position where the watermark is embedded. As a result, the watermark is not easily modified, so this scheme is secure and robust.

  20. Solution of optimal power flow using evolutionary-based algorithms

    African Journals Online (AJOL)

    This paper applies two reliable and efficient evolutionary-based methods named Shuffled Frog Leaping Algorithm ... Grey Wolf Optimizer (GWO) to solve Optimal Power Flow (OPF) problem. OPF is ..... The wolves search for the prey based on the alpha, beta, and delta positions. ..... Energy Conversion and Management, Vol.

  1. Knowledge Automatic Indexing Based on Concept Lexicon and Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG Lan-cheng; JIANG Dan; LE Jia-jin

    2005-01-01

This paper builds on two existing theories of automatic indexing of thematic knowledge concepts. A prohibit-word (stop-word) table with position information has been designed, and an improved Maximum Matching-Minimum Backtracking method has been investigated. Moreover, an improved indexing algorithm and application technology based on rules and a thematic-concept word table have been studied.

  2. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a

  4. TWO NEW FCT ALGORITHMS BASED ON PRODUCT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Guo Zhaoli; Shi Baochang; Wang Nengchao

    2001-01-01

In this paper we present a product system and give a representation for cosine functions within the system. Based on this formula, two new algorithms are designed for computing the Discrete Cosine Transform. Both algorithms have a regular recursive structure and good numerical stability, and are easy to parallelize.
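For reference, the transform these fast algorithms compute is the standard (unnormalised) DCT-II; a naive O(N²) implementation of that definition (not the paper's fast recursive scheme) is:

```python
import math

def dct2(x):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 1/2) * k).
    Fast algorithms like those in the paper reduce this O(N^2) loop
    to O(N log N) via recursive decomposition."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k) for i in range(n))
            for k in range(n)]
```

A quick sanity check: for a constant input, every coefficient except the DC term X[0] vanishes.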

  5. Relevance Feedback Algorithm Based on Collaborative Filtering in Image Retrieval

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2010-12-01

Full Text Available Content-based image retrieval is a very active field of study, in which improving retrieval speed and accuracy is a central issue. Retrieval performance can be improved by applying relevance feedback, which brings human judgment into the retrieval process. However, in many existing image retrieval methods, feedback information is not fully stored and reused, and accuracy and flexibility are relatively poor. To address this, collaborative filtering is combined with relevance feedback in this study, and an improved relevance feedback algorithm based on collaborative filtering is proposed. In this method, collaborative filtering is used not only to predict the semantic relevance between images in the database and the query samples, but also to analyze feedback log files from image retrieval, so that the historical relevance-feedback data can be fully exploited by the retrieval system, further improving feedback efficiency. The improved algorithm has been tested on a content-based image retrieval database, and its performance has been analyzed and compared with existing algorithms. The experimental results show that, compared with traditional feedback algorithms, this method clearly improves the efficiency of relevance feedback and effectively raises the recall and precision of image retrieval.
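The collaborative-filtering step, predicting how relevant an image is to a query user from other users' feedback logs, can be sketched as a similarity-weighted average (the log format, function names, and cosine weighting are our illustrative assumptions, not the paper's exact model):

```python
import math

def cosine(u, v):
    """Cosine similarity over items both users gave feedback on (non-zero)."""
    common = [i for i in range(len(u)) if u[i] and v[i]]
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(target, others, item):
    """Predict the target user's relevance score for `item` as a
    similarity-weighted average over users who rated that item."""
    num = den = 0.0
    for other in others:
        if other[item]:
            s = cosine(target, other)
            num += s * other[item]
            den += abs(s)
    return num / den if den else 0.0

# Feedback logs as user-by-image relevance scores (0 = no feedback yet).
logs = [[5, 3, 4], [4, 2, 5]]
query_user = [5, 3, 0]
score = predict(query_user, logs, item=2)
```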

  6. AR-based Algorithms for Short Term Load Forecast

    Directory of Open Access Journals (Sweden)

    Zuhairi Baharudin

    2014-02-01

Full Text Available Short-term load forecasting plays an important role in the planning and operation of power systems. The accuracy of the forecast is necessary for economically efficient operation and effective control of the plant. This study describes the autoregressive (AR) methods of Burg and Modified Covariance (MCOV) for solving the short-term load forecast. Both algorithms are tested with power load data from the Malaysian grid and from New South Wales, Australia. The forecast accuracy is assessed in terms of error, and the algorithms are benchmarked against previously published methods.
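As an illustrative stand-in for the AR fitting step (ordinary least squares via the normal equations rather than the Burg or modified-covariance recursions the study actually uses; all names here are ours), an AR(p) forecaster can be sketched as:

```python
def gauss_solve(m, b):
    """Solve m @ a = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col]:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_ar(series, p):
    """Fit AR(p) coefficients by least squares on lagged regressors."""
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    y = series[p:]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(p)]
    return gauss_solve(xtx, xty)

def forecast_one(series, coeffs):
    """One-step-ahead forecast from the last p observations."""
    recent = series[-len(coeffs):][::-1]
    return sum(c * x for c, x in zip(coeffs, recent))
```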

  7. Designers' Cognitive Thinking Based on Evolutionary Algorithms

    OpenAIRE

    Zhang Shutao; Jianning Su; Chibing Hu; Peng Wang

    2013-01-01

The research on cognitive thinking is important for constructing efficient intelligent design systems, but it is difficult to describe a model of cognitive thinking with a suitable mathematical theory. Based on an analysis of design strategy and innovative thinking, we investigated a design cognitive-thinking model that includes the external guiding thinking of "width priority - depth priority" and the internal dominant thinking of "divergent thinking - convergent thinking", and built a reaso...

  8. A Rule-based Track Anomaly Detection Algorithm for Maritime Force Protection

    Science.gov (United States)

    2014-08-01

likely to perform better with AIS data than with primary radar data. Rule-based algorithms are transparent, easy to use, and use less computation...

  9. Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm

    OpenAIRE

    Dong Yumin; Zhao Li

    2014-01-01

Quantum-behaved particle swarm optimization is a new intelligent optimization algorithm; it has few parameters and is easily implemented. In view of the premature-convergence problem of the existing quantum-behaved particle swarm optimization algorithm, this work puts forward a quantum-behaved particle swarm optimization algorithm based on an artificial fish swarm. The new algorithm builds on the quantum-behaved particle swarm algorithm, introducing the swarming and following behaviors, meanwhile using the a...

  10. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases, but since MPCs can differ significantly, these tuning methods become inapplicable and a trial-and-error tuning approach must be used, which can be quite time-consuming and can result in non-optimal tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantage of this approach is that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters of a given MPC. In addition, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, and can use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each definition set are compared, showing that the tuning parameters vary to meet each definition of optimum control, and thus that the generalized automated tuning approach for MPCs is feasible.
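As a hedged sketch of the GA half of such a tuning loop (the objective, bounds, and operator choices below are illustrative stand-ins, not the authors' formulation, and the fuzzy decision-making stage is omitted), a minimal real-coded GA might look like:

```python
import random

def genetic_tune(cost, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: tournament selection, midpoint crossover,
    Gaussian mutation, and elitism. `cost` maps a parameter vector
    (e.g. candidate MPC tuning parameters) to a scalar penalty."""
    rng = random.Random(seed)
    def clip(v, lo, hi):
        return max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        nxt = scored[:2]                      # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = (min(rng.sample(scored, 3), key=cost) for _ in range(2))
            child = [clip(0.5 * (a + b) + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
                     for a, b, (lo, hi) in zip(p1, p2, bounds)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

# Stand-in objective: pretend the "controller" is well tuned at (3, -1).
best = genetic_tune(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                    bounds=[(-5, 5), (-5, 5)])
```

In the paper's scheme, the scalar `cost` would instead come from the fuzzy aggregation of several qualitative control objectives.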

  11. Temporal-based needle segmentation algorithm for transrectal ultrasound prostate biopsy procedures.

    Science.gov (United States)

    Cool, Derek W; Gardi, Lori; Romagnoli, Cesare; Saikaly, Manale; Izawa, Jonathan I; Fenster, Aaron

    2010-04-01

    Automatic identification of the biopsy-core tissue location during a prostate biopsy procedure would provide verification that targets were adequately sampled and would allow for appropriate intraprocedure biopsy target modification. Localization of the biopsy core requires accurate segmentation of the biopsy needle and needle tip from transrectal ultrasound (TRUS) biopsy images. A temporal-based TRUS needle segmentation algorithm was developed specifically for the prostate biopsy procedure to automatically identify the TRUS image containing the biopsy needle from a collection of 2D TRUS images and to segment the biopsy-core location from the 2D TRUS image. The temporal-based segmentation algorithm performs a temporal analysis on a series of biopsy TRUS images collected throughout needle insertion and withdrawal. Following the identification of points of needle insertion and retraction, the needle axis is segmented using a Hough transform-based algorithm, which is followed by a temporospectral TRUS analysis to identify the biopsy-needle tip. Validation of the temporal-based algorithm is performed on 108 TRUS biopsy sequences collected from the procedures of ten patients. The success of the temporal search to identify the proper images was manually assessed, while the accuracies of the needle-axis and needle-tip segmentations were quantitatively compared to implementations of two other needle segmentation algorithms within the literature. The needle segmentation algorithm demonstrated a >99% accuracy in identifying the TRUS image at the moment of needle insertion from the collection of real-time TRUS images throughout the insertion and withdrawal of the biopsy needle. The segmented biopsy-needle axes were accurate to within 2.3 +/- 2.0 degrees and 0.48 +/- 0.42 mm of the gold standard. Identification of the needle tip to within half of the biopsy-core length (<10 mm) was 95% successful with a mean error of 2.4 +/- 4.0 mm. 
Needle-tip detection using the temporal-based

  12. Plagiarism Detection Based on SCAM Algorithm

    DEFF Research Database (Denmark)

    Anzelmi, Daniele; Carlone, Domenico; Rizzello, Fabio

    2011-01-01

    Plagiarism is a complex problem and considered one of the biggest in publishing of scientific, engineering and other types of documents. Plagiarism has also increased with the widespread use of the Internet as large amount of digital data is available. Plagiarism is not just direct copy but also...... paraphrasing, rewording, adapting parts, missing references or wrong citations. This makes the problem more difficult to handle adequately. Plagiarism detection techniques are applied by making a distinction between natural and programming languages. Our proposed detection process is based on natural language...... document. Our plagiarism detection system, like many Information Retrieval systems, is evaluated with metrics of precision and recall....
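The subset-style similarity at the heart of SCAM can be approximated with an asymmetric word-overlap measure; the sketch below (a simplified containment score over word frequencies, not the SCAM relative-frequency formula itself) illustrates the idea:

```python
import re
from collections import Counter

def word_counts(text):
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def containment(suspect, source):
    """Asymmetric overlap in the spirit of SCAM's subset measure: how much
    of the suspect document's vocabulary usage is covered by the source."""
    a, b = word_counts(suspect), word_counts(source)
    overlap = sum(min(a[w], b[w]) for w in a)
    return overlap / max(1, sum(a.values()))

original = "Plagiarism is not just direct copy but also paraphrasing and rewording."
copied = "Plagiarism is not just direct copy."
unrelated = "Fast Fourier transforms run in n log n time."
```

A verbatim excerpt scores 1.0 against its source while an unrelated text scores near 0, which is why the measure is asymmetric: a short copied passage should be flagged even though it covers little of the source.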

  14. ASSESSMENT OF RELIABILITY AND COMPARISON OF TWO ALGORITHMS FOR HAIL HAZARD DETECTION FROM AIRCRAFT

    Directory of Open Access Journals (Sweden)

    I.M. Braun

    2005-02-01

Full Text Available This paper presents and analyzes two algorithms for the detection of hail zones in clouds and precipitation: a parametric algorithm and an adaptive non-parametric algorithm. The reliability of detecting radar signals from hailstones is investigated by statistical simulation, with experimental research results as initial data. The results demonstrate the limits of both algorithms as well as the higher viability of the non-parametric algorithm. Polarimetric algorithms are useful for implementation in ground-based and airborne weather radars.

  15. Novel Frequency Hopping Sequences Generator Based on AES Algorithm

    Institute of Scientific and Technical Information of China (English)

    李振荣; 庄奕琪; 张博; 张超

    2010-01-01

A novel frequency hopping (FH) sequence generator based on the advanced encryption standard (AES) iterated block cipher is proposed for FH communication systems. The analysis shows that FH sequences based on the AES algorithm have good performance in uniformity, correlation, complexity, and security. A high-speed, low-power, and low-cost ASIC of the FH sequence generator is implemented by optimizing the structure of the S-Box and MixColumns of the AES algorithm, proposing a hierarchical power-management strategy, and applying ...

  16. Optimal design of steel portal frames based on genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yue CHEN; Kai HU

    2008-01-01

As for the optimal design of steel portal frames, due to both the complexity of the cross-sections of beams and columns and the discreteness of the design variables, it is difficult to obtain satisfactory results by traditional optimization. Based on the constraints of the Technical Specification for Light-weighted Steel Portal Frames of China, a genetic algorithm (GA) optimization program for portal frames, written in MATLAB code, is proposed in this paper. A graphical user interface (GUI) has also been developed for the optimization program, so that it can be used much more conveniently. Finally, examples illustrate the effectiveness and efficiency of the genetic-algorithm-based optimization program.

  17. A novel bit-quad-based Euler number computing algorithm

    OpenAIRE

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Xiao ZHAO

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis on bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by use of the information obtained during processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results ...
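For context, the classical bit-quad approach (Gray's formula) that this record improves upon counts 2x2 patterns over the image; a straightforward version for 4-connected foreground (the connectivity convention is our assumption; the paper's contribution is reducing the patterns that must be counted) is:

```python
def euler_number(img):
    """Euler number (objects minus holes) of a binary image via Gray's
    bit-quad counts, 4-connected foreground: E = (Q1 - Q3 + 2*QD) / 4."""
    rows, cols = len(img), len(img[0])
    # Zero-pad so every foreground pixel sits inside some 2x2 quad.
    padded = [[0] * (cols + 2)]
    for row in img:
        padded.append([0] + list(row) + [0])
    padded.append([0] * (cols + 2))
    q1 = q3 = qd = 0
    for r in range(rows + 1):
        for c in range(cols + 1):
            quad = (padded[r][c], padded[r][c+1], padded[r+1][c], padded[r+1][c+1])
            ones = sum(quad)
            if ones == 1:
                q1 += 1          # exactly one foreground pixel
            elif ones == 3:
                q3 += 1          # exactly three foreground pixels
            elif quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1          # two foreground pixels on a diagonal
    return (q1 - q3 + 2 * qd) // 4
```

A single pixel gives 1, and a ring (one object enclosing one hole) gives 0.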

  18. Recommendations for performance assessment of automatic sleep staging algorithms.

    Science.gov (United States)

    Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2014-01-01

    A number of automatic sleep scoring algorithms have been published in the last few years. These can potentially help save time and reduce costs in sleep monitoring. However, the use of both R&K and AASM classification, different databases and varying performance metrics makes it extremely difficult to compare these algorithms. In this paper, we describe some readily available polysomnography databases and propose a set of recommendations and performance metrics to promote uniform testing and direct comparison of different algorithms. We use two different polysomnography databases with a simple sleep staging algorithm to demonstrate the usage of all recommendations and presentation of performance results. We also illustrate how seemingly similar results using two different databases can have contrasting accuracies in different sleep stages. Finally, we show how selection of different training and test subjects from the same database can alter the final performance results.
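Two metrics in the spirit of these recommendations are chance-corrected agreement and per-stage sensitivity; the sketch below (generic implementations with our own function names, not the paper's prescribed protocol) shows both:

```python
from collections import Counter

def cohens_kappa(reference, predicted):
    """Chance-corrected agreement between a human scorer and an automatic
    sleep stager, one label per epoch."""
    n = len(reference)
    observed = sum(r == p for r, p in zip(reference, predicted)) / n
    ref_freq, pred_freq = Counter(reference), Counter(predicted)
    expected = sum(ref_freq[s] * pred_freq[s] for s in ref_freq) / (n * n)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0

def per_stage_accuracy(reference, predicted):
    """Sensitivity per stage: of the epochs a human scored as stage s,
    what fraction did the algorithm also score as s?"""
    hits, totals = Counter(), Counter(reference)
    for r, p in zip(reference, predicted):
        if r == p:
            hits[r] += 1
    return {s: hits[s] / totals[s] for s in totals}
```

Reporting per-stage figures guards against the effect the paper highlights, where overall accuracy hides poor performance in under-represented stages.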

  19. Half-global discretization algorithm based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    Tan Xu; Chen Yingwu

    2009-01-01

It is being widely studied how to extract knowledge from a decision table based on rough set theory; a novel problem is how to discretize a decision table having continuous attributes. In order to obtain more reasonable discretization results, a discretization algorithm is proposed that arranges half-global discretization based on the correlation coefficient of each continuous attribute while respecting the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized; by doing this, rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut-points while guaranteeing the consistency of the decision table after discretization. The idea of the algorithm is illustrated with the Iris data, and experiments comparing the outcomes of four discretized datasets are given, calculated by the proposed algorithm and four other typical discretization algorithms respectively. After that, classification rules are deduced and summarized through rough-set-based classifiers. Results show that the proposed discretization algorithm is able to generate optimal classification accuracy while minimizing the number of discrete intervals, and it displays superiority especially when dealing with decision tables having a large number of attributes.

  20. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    Directory of Open Access Journals (Sweden)

    Zhiming Song

    2015-01-01

Full Text Available As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m-1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m-1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper.

  1. A novel multiobjective evolutionary algorithm based on regression analysis.

    Science.gov (United States)

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m - 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is (m - 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper.
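The nondominated-sorting selection mentioned in both records can be sketched as repeated Pareto-front peeling (a simple O(n²)-per-front version in the NSGA-II style, not the authors' implementation; minimization is assumed in every objective):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Peel off Pareto fronts one at a time: front 0 is the nondominated set,
    front 1 is nondominated once front 0 is removed, and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

fronts = nondominated_sort([(1, 2), (2, 1), (2, 2), (3, 3)])
```

Selection then fills the next generation front by front, so individuals on earlier fronts are always preferred.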

  2. Cosine-Based Clustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Mohammed A. H. Lubbad

    2012-02-01

Full Text Available Because many applications need to manage spatial data, clustering large spatial databases is an important problem: the goal is to find the densely populated regions of the feature space, for use in data mining, knowledge discovery, or efficient information retrieval. A good clustering approach should be efficient and detect clusters of arbitrary shape, and it must be insensitive to outliers (noise) and to the order of input data. In this paper, Cosine Cluster is proposed based on the cosine transform, and it satisfies all of the above requirements. Using the multi-resolution property of cosine transforms, arbitrarily shaped clusters can be effectively identified at different degrees of accuracy. Cosine Cluster also proves to be highly efficient in terms of time complexity. Experimental results on very large data sets are presented, showing the efficiency and effectiveness of the proposed approach compared to other recent clustering methods.

  3. Treatment Algorithms Based on Tumor Molecular Profiling: The Essence of Precision Medicine Trials.

    Science.gov (United States)

    Le Tourneau, Christophe; Kamal, Maud; Tsimberidou, Apostolia-Maria; Bedard, Philippe; Pierron, Gaëlle; Callens, Céline; Rouleau, Etienne; Vincent-Salomon, Anne; Servant, Nicolas; Alt, Marie; Rouzier, Roman; Paoletti, Xavier; Delattre, Olivier; Bièche, Ivan

    2016-04-01

    With the advent of high-throughput molecular technologies, several precision medicine (PM) studies are currently ongoing that include molecular screening programs and PM clinical trials. Molecular profiling programs establish the molecular profile of patients' tumors with the aim to guide therapy based on identified molecular alterations. The aim of prospective PM clinical trials is to assess the clinical utility of tumor molecular profiling and to determine whether treatment selection based on molecular alterations produces superior outcomes compared with unselected treatment. These trials use treatment algorithms to assign patients to specific targeted therapies based on tumor molecular alterations. These algorithms should be governed by fixed rules to ensure standardization and reproducibility. Here, we summarize key molecular, biological, and technical criteria that, in our view, should be addressed when establishing treatment algorithms based on tumor molecular profiling for PM trials. © The Author 2015. Published by Oxford University Press.

  4. Improved motion information-based infrared dim target tracking algorithms

    Science.gov (United States)

    Lei, Liu; Zhijian, Huang

    2014-11-01

Accurate and fast tracking of infrared (IR) dim targets is very important for infrared precision guidance, early warning, video surveillance, etc. However, under complex backgrounds, such as clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper, namely an improved mean shift algorithm, an improved optical flow method, and an improved particle filter method. The basic principles and the implementation procedures of these modified tracking algorithms are described. Using these algorithms, experiments on real-life IR and color images are performed. The whole implementation process and the results are analyzed, and the tracking algorithms are evaluated both subjectively and objectively. The results prove that the proposed methods have satisfying tracking effectiveness and robustness, along with high tracking efficiency, and can be used for real-time tracking.

  5. A Metaheuristic Algorithm Based on Chemotherapy Science: CSA

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Salmani

    2017-01-01

Full Text Available Among scientific fields of study, mathematical programming has high status, and its importance has led researchers to develop accurate models and effective solution approaches for optimization problems. In particular, metaheuristic algorithms are approximate methods for solving optimization problems whereby good (not necessarily optimal) solutions can be generated. In this study, we propose a population-based metaheuristic algorithm, inspired by the chemotherapy method of curing cancer, that mainly searches the infeasible region. As in chemotherapy, the Chemotherapy Science Algorithm (CSA) tries to kill inappropriate solutions (cancerous and bad cells of the human body); however, this inevitably risks incidentally destroying some acceptable solutions (healthy cells). In addition, just as cycles of cancer treatment repeat, the algorithm is iterated. To align the chemotherapy process with the proposed algorithm, basic terms and definitions including the Infeasibility Function (IF), objective function (OF), Cell Area (CA), and Random Cells (RCs) are presented in this study. In the terminology of algorithms and optimization, the IF and OF are mainly applicable as criteria to compare pairs of generated solutions. Finally, we test the CSA and its structure on the benchmark Traveling Salesman Problem (TSP).

  6. A Global Path Planning Algorithm Based on Bidirectional SVGA

    Directory of Open Access Journals (Sweden)

    Taizhi Lv

    2017-01-01

    Full Text Available For path planning algorithms based on a visibility graph, constructing the visibility graph is very time-consuming. To reduce the computing time of visibility graph construction, this paper proposes a novel global path planning algorithm, bidirectional SVGA (simultaneous visibility graph construction and path optimization by A⁎). This algorithm does not construct a visibility graph before path optimization; instead, it constructs the visibility graph and searches for an optimal path at the same time. At each step, the node with the lowest estimated cost is selected for expansion. According to the status of this node, different through lines are drawn. If a line is collision-free, it is added to the visibility graph. If not, the vertices of the obstacles passed through by this line are added to the OPEN list for expansion. In the SVGA process, only the few visible edges relevant to the optimal path are drawn, and most visible edges are ignored. To take advantage of multicore processors, the algorithm performs SVGA in parallel from both directions. Through SVGA and parallel execution, the algorithm reduces computing time and space. Simulation results in different environments show that the proposed algorithm improves the time and space efficiency of path planning.
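    The lowest-estimated-cost expansion at the heart of SVGA is ordinary A*; a minimal grid-based A* sketch (the visibility-graph construction and bidirectional parallel search are omitted) looks like this:

    ```python
    import heapq

    def astar(grid, start, goal):
        """A* on a 4-connected grid; 1 = obstacle. Returns path length or None."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
        open_list = [(h(start), 0, start)]   # (estimated total cost, cost so far, node)
        best_g = {start: 0}
        while open_list:
            f, g, node = heapq.heappop(open_list)   # node with lowest estimated cost
            if node == goal:
                return g
            if g > best_g.get(node, float("inf")):
                continue                            # stale queue entry
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best_g.get((nr, nc), float("inf")):
                        best_g[(nr, nc)] = ng
                        heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc)))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    length = astar(grid, (0, 0), (2, 0))  # must detour around the obstacle row
    ```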

  7. Texture orientation-based algorithm for detecting infrared maritime targets.

    Science.gov (United States)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. Besides, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly backed by experimental data acquired under different environmental conditions.

  8. A Scheduling Algorithm Based on Petri Nets and Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Rachida H. Ghoul

    2007-01-01

    Full Text Available This study addresses a hybrid flexible manufacturing system (HFMS) short-term scheduling problem. Based on the state of the art of general scheduling algorithms, we present the metaheuristic we have chosen to apply to a given example of an HFMS: the simulated annealing (SA) algorithm. An HFMS model based on hierarchical Petri nets was used to represent the static and dynamic behavior of the HFMS and to design scheduling solutions. The hierarchical Petri net model was regarded as being made up of a set of single timed colored Petri net models, each representing one process composed of many operations and tasks. The complex scheduling problem was thus decomposed into simple sub-problems, and the scheduling algorithm was applied to each sub-model in order to resolve conflicts on shared production resources.
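    A minimal simulated-annealing loop of the kind applied to each sub-model can be sketched on a one-machine scheduling toy problem (the job data, swap move, and cooling schedule are illustrative assumptions, not the paper's Petri-net model):

    ```python
    import math
    import random

    def cost(order, jobs):
        """Total weighted completion time of a one-machine schedule."""
        t, total = 0, 0
        for j in order:
            p, w = jobs[j]
            t += p
            total += w * t
        return total

    def anneal(jobs, t0=50.0, cooling=0.99, steps=4000, seed=0):
        rng = random.Random(seed)
        cur = list(range(len(jobs)))
        rng.shuffle(cur)
        cur_cost = cost(cur, jobs)
        best, best_cost = cur[:], cur_cost
        temp = t0
        for _ in range(steps):
            i, j = rng.sample(range(len(cur)), 2)   # neighbor: swap two jobs
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            cand_cost = cost(cand, jobs)
            # Metropolis acceptance: always take improvements, sometimes worse moves
            if cand_cost <= cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / temp):
                cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
            temp *= cooling
        return best, best_cost

    jobs = [(3, 1), (1, 4), (2, 2), (4, 3)]   # (processing time, weight)
    schedule, value = anneal(jobs)
    ```

    For this instance the optimum (41) matches the weighted-shortest-processing-time order.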

  9. The PCNN adaptive segmentation algorithm based on visual perception

    Science.gov (United States)

    Zhao, Yanming

    To solve the adaptive parameter determination problem of the pulse coupled neural network (PCNN) and improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception information is proposed. Based on visually perceived image information and the Gabor mathematical model of the optic nerve cells' receptive field, the algorithm adaptively determines the receptive field of each pixel of the image, and adaptively determines the network parameters W, M, and β of the PCNN from the Gabor model, which overcomes the parameter determination problem of the traditional PCNN in image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, and demonstrate the advantage of incorporating visual perception information into the PCNN for image segmentation.

  10. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses the separability among classes in a feature; highly clear features contribute to obtaining high classification accuracy. CScore is a measure that scores the clearness of each feature, based on how samples cluster around the class centroids in that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: The experiments confirm that CBFS outperforms state-of-the-art feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
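    A clearness-style score can be sketched as follows; the exact CScore formula is not reproduced here, so the between/within ratio below is a hypothetical stand-in:

    ```python
    from statistics import mean

    def clearness(values, labels):
        """Clearness of one feature: spread of class centroids relative to the
        average spread of samples around their own centroid (higher = clearer).
        This ratio is an illustrative stand-in for the paper's CScore."""
        classes = sorted(set(labels))
        centroids = {c: mean(v for v, l in zip(values, labels) if l == c) for c in classes}
        within = mean(abs(v - centroids[l]) for v, l in zip(values, labels))
        overall = mean(values)
        between = mean(abs(centroids[c] - overall) for c in classes)
        return between / (within + 1e-12)

    def select_features(X, y, k):
        """Rank features (columns of X) by clearness and keep the top k."""
        scores = [clearness([row[j] for row in X], y) for j in range(len(X[0]))]
        return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

    X = [[0.1, 5.0], [0.2, 9.0], [0.9, 5.2], [1.0, 8.8]]  # feature 0 separates the classes
    y = [0, 0, 1, 1]
    top = select_features(X, y, 1)
    ```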

  11. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but it is highly vulnerable to noise because it does not consider spatial information. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more noise-resistant than standard FCM, with more certainty and less fuzziness, making it practical and effective for medical image segmentation.
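    For reference, the standard FCM iteration that the paper builds on can be sketched on scalar data (the MMTD-based medium membership and the spatial neighborhood term are omitted):

    ```python
    def fcm(data, c=2, m=2.0, iters=50):
        """Standard fuzzy c-means on scalar data; centers start at the data
        extremes (deterministic initialisation for c = 2)."""
        centers = [min(data), max(data)]
        for _ in range(iters):
            u = []                        # u[i][k]: membership of point i in cluster k
            for x in data:
                d = [abs(x - v) + 1e-12 for v in centers]
                u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                          for k in range(c)])
            # centers: membership-weighted means
            centers = [sum((u[i][k] ** m) * data[i] for i in range(len(data))) /
                       sum(u[i][k] ** m for i in range(len(data)))
                       for k in range(c)]
        return centers, u

    data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two well-separated "intensity" groups
    centers, u = fcm(data)
    ```

    Each sample's memberships sum to one, and the centers settle near the two group means.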

  12. Genetic algorithm-based evaluation of spatial straightness error

    Institute of Scientific and Technical Information of China (English)

    崔长彩; 车仁生; 黄庆成; 叶东; 陈刚

    2003-01-01

    A genetic algorithm (GA)-based approach is proposed to evaluate the straightness error of spatial lines. According to the mathematical definition of spatial straightness, a verification model is established for the straightness error, the fitness function of the GA is given, and the implementation techniques of the proposed algorithm are discussed in detail. These techniques include real-number encoding, adaptive variable-range choosing, roulette-wheel and elitist combination selection strategies, and heuristic crossover and single-point mutation schemes. An application example is given to validate the proposed algorithm. The computation results show that the GA-based approach is a superior nonlinear parallel optimization method: the performance of the evolving population can be improved through genetic operations such as reproduction, crossover and mutation until the optimum goal of the minimum zone solution is obtained. The quality of the solution is better and the efficiency of computation is higher than with other methods.
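    The GA machinery listed above (elitist selection, blend-style heuristic crossover, single-point mutation) can be sketched on a 2-D simplification of the straightness problem; the encoding and parameters here are illustrative assumptions, not the paper's:

    ```python
    import random

    def deviation(params, pts):
        """Minimum-zone style objective: largest distance from the points to the
        line y = a*x + b (a 2-D simplification of spatial straightness)."""
        a, b = params
        return max(abs(y - (a * x + b)) for x, y in pts) / (1 + a * a) ** 0.5

    def ga(pts, pop_size=60, gens=200, seed=3):
        rng = random.Random(seed)
        pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda p: deviation(p, pts))
            elite = pop[: pop_size // 4]                    # elitist selection
            children = []
            while len(elite) + len(children) < pop_size:
                p1, p2 = rng.sample(elite, 2)
                w = rng.random()                            # heuristic (blend) crossover
                child = [w * p1[0] + (1 - w) * p2[0], w * p1[1] + (1 - w) * p2[1]]
                if rng.random() < 0.3:                      # single-point mutation
                    child[rng.randrange(2)] += rng.gauss(0, 0.1)
                children.append(child)
            pop = elite + children
        return min(pop, key=lambda p: deviation(p, pts))

    pts = [(0, 0.02), (1, 1.01), (2, 1.98), (3, 3.0), (4, 4.02)]  # nearly y = x
    best = ga(pts)
    ```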

  13. A structural comparison of measurement-based admission control algorithms

    Institute of Scientific and Technical Information of China (English)

    GU Yi-ran; WANG Suo-ping; WU Hai-ya

    2006-01-01

    Measurement-based admission control (MBAC) algorithms are designed for relaxed real-time services. In contrast to traditional connection admission control mechanisms, the most attractive feature of MBAC is that it does not require a prior traffic model, which is very difficult for a user to specify tightly before establishing a flow. Other advantages of MBAC include higher network utilization and quality service for users. In this article, a study of the equations used in MBAC shows that they can all be expressed in the same form. Based on this common form, some MBAC algorithms can achieve the same performance provided they satisfy certain conditions.

  14. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity. Analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed, in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical-analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.
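    The diffusion half of such a scheme can be sketched with a generic chaotic keystream; the logistic map below is a stand-in, since the paper's 2D-SIMM equations and CST confusion step are not reproduced here:

    ```python
    def logistic_stream(x0, n, r=3.99):
        """Byte keystream from the logistic map x -> r*x*(1-x), a common
        chaotic stand-in for the paper's 2D-SIMM map."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) % 256)
        return out

    def encrypt(pixels, key=0.3456):
        """Diffusion only: XOR each pixel with the chaotic keystream."""
        ks = logistic_stream(key, len(pixels))
        return [p ^ k for p, k in zip(pixels, ks)]

    decrypt = encrypt   # an XOR stream cipher is its own inverse

    pixels = [12, 200, 45, 45, 45, 99]
    cipher = encrypt(pixels)
    plain = decrypt(cipher)
    ```

    The same key regenerates the same keystream, so decryption is the identical operation.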

  15. An Algorithm on Generating Lattice Based on Layered Concept Lattice

    Directory of Open Access Journals (Sweden)

    Zhang Chang-sheng

    2013-08-01

    Full Text Available Concept lattices are an effective tool for data analysis and rule extraction, but a bottleneck limiting their application is how to generate a lattice efficiently. In this paper, an algorithm LCLG for generating a lattice in batch processing based on a layered concept lattice is developed. The lattice is generated downward layer by layer through the concept nodes and provisional nodes in the current layer; parent-child relationships between concept nodes are then identified upward layer by layer, and the Hasse diagram of inter-layer connections is generated. During the generation of the lattice nodes in each layer, pruning operations are performed dynamically according to the relevant properties, and unnecessary nodes are deleted, so that the generation speed is improved greatly. The experimental results demonstrate that the proposed algorithm has good performance.

  16. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communications are mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests severely worsen network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm presents a mechanism for tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are many selfish nodes in the opportunistic networks.

  17. Time-Based Dynamic Trust Model Using Ant Colony Algorithm

    Institute of Scientific and Technical Information of China (English)

    TANG Zhuo; LU Zhengding; LI Kai

    2006-01-01

    Trust in a distributed environment is uncertain and varies with various factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which records the trust intensity between this entity and the others. The trust intensity is dynamic due to time and the inter-operation between two entities; a method is proposed to quantify this change based on the idea of the ant colony algorithm, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyses the influence on the trust intensity among all entities aroused by a change of trust intensity between two entities, and presents an algorithm to resolve the problem. Finally, we show the process of trust change caused by the lapse of time and by inter-operation through an example.

  18. LAHS: A novel harmony search algorithm based on learning automata

    Science.gov (United States)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
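    For context, the basic HS loop whose HMCR/PAR/bw parameters the LAHS adapts can be sketched as follows (fixed parameters here; the learning-automata update is omitted):

    ```python
    import random

    def harmony_search(f, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                       iters=2000, lo=-5.0, hi=5.0, seed=7):
        """Basic harmony search minimising f over [lo, hi]^dim."""
        rng = random.Random(seed)
        memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if rng.random() < hmcr:                 # memory consideration (HMCR)
                    x = rng.choice(memory)[d]
                    if rng.random() < par:              # pitch adjustment (PAR, bw)
                        x += rng.uniform(-bw, bw)
                else:                                   # random selection
                    x = rng.uniform(lo, hi)
                new.append(min(hi, max(lo, x)))
            worst = max(range(hms), key=lambda i: f(memory[i]))
            if f(new) < f(memory[worst]):               # replace the worst harmony
                memory[worst] = new
        return min(memory, key=f)

    sphere = lambda v: sum(x * x for x in v)
    best = harmony_search(sphere)
    ```

    On the sphere function the memory contracts toward the origin; LAHS replaces the fixed HMCR/PAR/bw with values chosen by learning automata.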

  19. Effective ANT based Routing Algorithm for Data Replication in MANETs

    Directory of Open Access Journals (Sweden)

    N.J. Nithya Nandhini

    2013-12-01

    Full Text Available In a mobile ad hoc network, the nodes often move, continually changing the topology. Data packets can be forwarded from one node to another on demand. To increase data accessibility, data are replicated at nodes and made sharable to other nodes, assuming that all mobile hosts cooperate to share their memory and forward data packets. In reality, however, not all nodes share their resources for the benefit of others: such nodes may act selfishly in sharing memory and forwarding data packets. This paper focuses on the selfishness of mobile nodes in replica allocation and proposes a routing protocol based on the ant colony algorithm to improve efficiency. The ant colony algorithm is used to reduce the overhead in the mobile network, so that data access is more efficient than with other routing protocols. The results show the efficiency of the ant-based routing algorithm in replica allocation.

  20. Onboard assessment of XRF spectra using genetic algorithms for decision making on an autonomous underwater vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Breen, Jeremy [Tasmanian Information and Communication Technologies Centre, Commonwealth Scientific and Industrial Research Organisation, GPO Box 1538, Hobart, TAS (Australia); School of Computing and Information Systems, University of Tasmania, Hobart, TAS (Australia); Souza, P. de, E-mail: paulo.desouza@csiro.au [Tasmanian Information and Communication Technologies Centre, Commonwealth Scientific and Industrial Research Organisation, GPO Box 1538, Hobart, TAS (Australia); Timms, G.P. [Tasmanian Information and Communication Technologies Centre, Commonwealth Scientific and Industrial Research Organisation, GPO Box 1538, Hobart, TAS (Australia); Ollington, R. [School of Computing and Information Systems, University of Tasmania, Hobart, TAS (Australia)

    2011-06-15

    In order to optimise the use of the limited resources (time, power) of an autonomous underwater vehicle (AUV) carrying a miniaturised X-ray fluorescence (XRF) spectrometer for in situ autonomous chemical mapping of the surface of sediments at a desired resolution, a genetic algorithm for rapid curve fitting is reported in this paper. The method converges quickly and provides an accurate in situ assessment of the metals present, which helps the control system of the AUV decide on future sampling locations. More thorough analysis of the available data can be performed once the AUV has returned to base (the laboratory).

  1. An Initiative-Learning Algorithm Based on System Uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jun

    2005-01-01

    Initiative-learning algorithms are characterized by, and hence advantageous for, their independence of prior domain knowledge. Usually, their induced results can more objectively express the potential characteristics and patterns of information systems. Initiative-learning processes can be effectively conducted by system uncertainty, because uncertainty is an intrinsic common feature of, and an essential link between, information systems and their induced results. Obviously, the effectiveness of such an initiative-learning framework depends heavily on the accuracy of system uncertainty measurements. Herein, a more reasonable method for measuring system uncertainty is developed based on rough set theory and the concept of information entropy; a new algorithm is then developed on the basis of the new system uncertainty measurement and Skowron's algorithm for mining propositional default decision rules. The proposed algorithm is typically initiative-learning and adapts well to system uncertainty. As shown by simulation experiments, its comprehensive performance is much better than that of congeneric algorithms.
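    An entropy-based uncertainty measure of the kind referred to can be sketched as follows; this simplified partition entropy is an assumption, not the paper's exact rough-set measure:

    ```python
    from collections import Counter
    from math import log2

    def partition_entropy(labels):
        """System uncertainty as the Shannon entropy of the equivalence
        classes induced by an attribute (a simplified rough-set-style measure)."""
        n = len(labels)
        return -sum((k / n) * log2(k / n) for k in Counter(labels).values())

    # a half/half partition is maximally uncertain for two classes
    h = partition_entropy(["a", "a", "b", "b"])
    ```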

  2. Interchanges Safety: Forecast Model Based on ISAT Algorithm

    Directory of Open Access Journals (Sweden)

    Sascia Canale

    2013-09-01

    Full Text Available The ISAT algorithm (Interchange Safety Analysis Tool), developed by the Federal Highway Administration (FHWA), provides design and safety engineers with an automated tool for assessing the safety effects of geometric design and traffic control features at an existing interchange and the adjacent roadway network. Concerning the default calibration coefficients and crash distributions by severity and type, users should modify these default values to more accurately reflect the safety experience of their local/State agency prior to using ISAT to perform actual safety assessments. This paper presents the calibration process of the FHWA algorithm for the local situation of eastern Sicily. The aim is to provide an instrument for accident forecast analysis, useful to highway managers, in order to identify those infrastructural elements that, if suitably calibrated, can contribute to improving the safety level of interchange areas.

  3. An Efficient Multi-path Routing Algorithm Based on Hybrid Firefly Algorithm for Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    K. Kumaravel

    2015-05-01

    Full Text Available A Wireless Mesh Network (WMN) uses the latest technology to provide end users a high-quality service referred to as the Internet’s “last mile”. Multicast communication is also one of the most important technologies employed in WMNs. Among the several issues that every WMN technology must address during data transmission, routing is significantly important. The IEEE 802.11s standard sets out procedures to be followed to facilitate interconnection and thus devise an appropriate WMN. Several protocols, devised mainly on the basis of machine learning and artificial intelligence, have been introduced by many authors. Multi-path routing, which facilitates transmission of data over several paths, has proved a useful strategy for achieving reliability in WMNs, although multi-path routing cannot by itself guarantee deterministic transmission, since multiple paths are available for data transmission from source to destination node. The algorithms employed in earlier studies did not take into consideration routing metrics, including energy-aware metrics, for path selection during data transfer. The present study proposes a hybrid multi-path routing algorithm that considers routing metrics, including energy and minimal loss, for efficient path selection and data transfer. The proposed algorithm has two phases. In the first phase, Prim's algorithm is used for route discovery in the network. In the second, a hybrid firefly algorithm based on harmony search is employed to select the most suitable path through proper analysis of metrics including energy awareness and minimal loss for every path that has

  4. A Lex-BFS-based recognition algorithm for Robinsonian matrices

    NARCIS (Netherlands)

    Laurent, M.; Seminaroti, M.; Paschos, V.; Widmayer, P.

    2015-01-01

    Robinsonian matrices arise in the classical seriation problem and play an important role in many applications where unsorted similarity (or dissimilarity) information must be reordered. We present a new polynomial time algorithm to recognize Robinsonian matrices based on a new characterization of

  5. A Lex-BFS-based recognition algorithm for Robinsonian matrices

    NARCIS (Netherlands)

    M. Laurent (Monique); M. Seminaroti (Matteo); V. Paschos; P. Widmayer

    2015-01-01

    Robinsonian matrices arise in the classical seriation problem and play an important role in many applications where unsorted similarity (or dissimilarity) information must be reordered. We present a new polynomial time algorithm to recognize Robinsonian matrices based on a new characte

  6. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
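    The Needleman-Wunsch scoring recurrence is standard; a minimal sketch, with navigation pages treated as sequence symbols (the scoring values are illustrative), looks like this:

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        """Global alignment score of two sequences; here, a user's navigation
        path could be compared against an ideal path to quantify disorientation."""
        n, m = len(a), len(b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap          # aligning a prefix against nothing
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    ideal = "ABCDE"        # ideal navigation sequence of pages
    actual = "ABXDE"       # the user visited page X instead of C
    similarity = needleman_wunsch(ideal, actual)
    ```

    A lower score relative to the ideal path indicates greater disorientation.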

  8. Competition assignment problem algorithm based on Hungarian method

    Institute of Scientific and Technical Information of China (English)

    KONG Chao; REN Yongtai; GE Huiling; DENG Hualing

    2007-01-01

    The traditional Hungarian method can only solve standard assignment problems; it cannot solve competition assignment problems. This article emphatically discusses the difference between standard assignment problems and competition assignment problems. Competition assignment problem algorithms based on the Hungarian method, and their solutions, are studied.
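    For small instances, the optimum that the Hungarian method computes in polynomial time can be checked by brute force; a sketch (the cost matrix is illustrative, and competition-specific constraints are not modeled):

    ```python
    from itertools import permutations

    def best_assignment(cost):
        """Exhaustive optimal assignment (exact for small instances); the
        Hungarian method finds the same optimum in polynomial time."""
        n = len(cost)
        best_perm, best_cost = None, float("inf")
        for perm in permutations(range(n)):     # perm[i] = task given to worker i
            total = sum(cost[i][perm[i]] for i in range(n))
            if total < best_cost:
                best_perm, best_cost = perm, total
        return best_perm, best_cost

    cost = [[4, 1, 3],
            [2, 0, 5],
            [3, 2, 2]]
    assignment, total = best_assignment(cost)
    ```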

  9. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu

    2011-01-01

    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for CS reconstruction, the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images from a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves performance superior to other state-of-the-art CS reconstruction algorithms.

  10. An Efficient 16-Bit Multiplier based on Booth Algorithm

    Science.gov (United States)

    Khan, M. Zamin Ali; Saleem, Hussain; Afzal, Shiraz; Naseem, Jawed

    2012-11-01

    Multipliers are key components of many high-performance systems such as microprocessors and digital signal processors. Optimizing the speed and area of a multiplier is a major design issue; these are usually conflicting constraints, so improving speed mostly results in larger area. A VHDL-designed architecture based on the Booth multiplication algorithm is proposed, which not only optimizes speed but is also energy-efficient.
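    The Booth recoding identity underlying such multipliers can be sketched in software: each bit pair (b[i-1], b[i]) contributes (b[i-1] - b[i])·a·2^i, and the contributions sum to the two's-complement product:

    ```python
    def booth_multiply(a, b, bits=16):
        """Radix-2 Booth recoding: product = sum over i of (b[i-1] - b[i]) * a << i,
        where b[-1] = 0 and b is interpreted as a two's-complement value."""
        mask = (1 << bits) - 1
        bv = b & mask                         # two's-complement bit pattern of b
        product = 0
        prev = 0                              # implicit bit b[-1] = 0
        for i in range(bits):
            cur = (bv >> i) & 1
            product += (prev - cur) * (a << i)
            prev = cur
        return product

    p = booth_multiply(-123, 456)
    ```

    Hardware versions scan the same bit pairs but share a single adder/shifter; the recoding skips over runs of identical bits.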

  11. Reducing Ultrasonic Signal Noise by Algorithms based on Wavelet Thresholding

    Directory of Open Access Journals (Sweden)

    M. Kreidl

    2002-01-01

    Full Text Available Traditional techniques for reducing ultrasonic signal noise are based on the optimum frequency of an acoustic wave, ultrasonic probe construction and low-noise electronic circuits. This paper describes signal processing methods for noise suppression using a wavelet transform. Computer simulations of the proposed testing algorithms are presented.
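    A minimal wavelet-thresholding pass of the kind described can be sketched with a one-level Haar transform and soft thresholding (the threshold value and transform depth are illustrative):

    ```python
    def haar_denoise(signal, threshold):
        """One-level Haar decomposition, soft-threshold the detail
        coefficients, then reconstruct (signal length must be even)."""
        half = len(signal) // 2
        avg = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
        det = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
        # soft thresholding: shrink detail magnitudes toward zero
        soft = [max(abs(d) - threshold, 0.0) * (1 if d > 0 else -1) for d in det]
        out = []
        for a, d in zip(avg, soft):
            out.extend([a + d, a - d])
        return out

    noisy = [1.0, 1.1, 1.0, 0.9, 5.0, 5.1, 5.0, 4.9]
    clean = haar_denoise(noisy, threshold=0.06)
    ```

    With threshold zero the transform reconstructs the signal exactly; a small threshold removes the fine-scale jitter while keeping the step edge.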

  12. A Table Based Algorithm for Minimum Directed Spanning Trees

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For a weighted digraph, an optimal directed spanning tree algorithm called the table-based algorithm (TBA) is proposed in the paper, based on a table instead of the weighted digraph. The optimality is proved, and a numerical example is demonstrated.

  13. A CT Image Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    QU Jing-yi; SHI Hao-shan

    2006-01-01

    Level set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.

  14. Risk-assessment algorithm and recommendations for venous thromboembolism prophylaxis in medical patients

    Directory of Open Access Journals (Sweden)

    Ana T Rocha

    2007-09-01

    Full Text Available Ana T Rocha1, Edison F Paiva2, Arnaldo Lichtenstein2, Rodolfo Milani Jr2, Cyrillo Cavalheiro-Filho3, Francisco H Maffei4. 1Hospital Universitario Professor Edgard Santos da Universidade Federal da Bahia, Salvador, Bahia, Brazil; 2Hospital das Clinicas da Faculdade de Medicina da Universidade de Sao Paulo, Sao Paulo, Brazil; 3Instituto do Coracao do Hospital das Clinicas da Faculdade de Medicina da Universidade de Sao Paulo, Sao Paulo, Brazil; 4Faculdade de Medicina de Botucatu, Botucatu, Sao Paulo, Brazil. Abstract: The risk for venous thromboembolism (VTE) in medical patients is high, but risk assessment is rarely performed because there is not yet a good method to identify candidates for prophylaxis. Purpose: To perform a systematic review of VTE risk factors (RFs) in hospitalized medical patients and generate recommendations (RECs) for prophylaxis that can be implemented into practice. Data sources: A multidisciplinary group of experts from 12 Brazilian medical societies searched MEDLINE, Cochrane, and LILACS. Study selection: Two experts independently classified the evidence for each RF by its scientific quality in a standardized manner. A risk-assessment algorithm was created based on the results of the review. Data synthesis: Several VTE RFs have enough evidence to support RECs for prophylaxis in hospitalized medical patients (eg, increasing age, heart failure, and stroke); other factors are considered adjuncts of risk (eg, varices, obesity, and infections). According to the algorithm, hospitalized medical patients ≥40 years old with decreased mobility and ≥1 RF should receive chemoprophylaxis with heparin, provided they don’t have contraindications. High prophylactic doses of unfractionated heparin or low-molecular-weight heparin must be administered and maintained for 6–14 days. Conclusions: A multidisciplinary group generated evidence-based RECs and an easy-to-use algorithm to facilitate VTE prophylaxis in medical patients

  15. Developing NASA's VIIRS LST and Emissivity EDRs using a physics based Temperature Emissivity Separation (TES) algorithm

    Science.gov (United States)

    Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.

    2015-12-01

    Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.

  16. The Patch-Levy-Based Bees Algorithm Applied to Dynamic Optimization Problems

    Directory of Open Access Journals (Sweden)

    Wasim A. Hussein

    2017-01-01

    Full Text Available Many real-world optimization problems are actually of dynamic nature. These problems change over time in terms of the objective function, decision variables, constraints, and so forth. Therefore, it is very important to study the performance of a metaheuristic algorithm in dynamic environments to assess the robustness of the algorithm to deal with real-world problems. In addition, it is important to adapt the existing metaheuristic algorithms to perform well in dynamic environments. This paper investigates a recently proposed version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), on solving dynamic problems, and adapts it to deal with such problems. The performance of the PLBA is compared with other BA versions and other state-of-the-art algorithms on a set of dynamic multimodal benchmark problems of different degrees of difficulty. The results of the experiments show that PLBA achieves better results than the other BA variants. The obtained results also indicate that PLBA significantly outperforms some of the other state-of-the-art algorithms and is competitive with others.
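    The abstract does not spell out the Levy-flight mechanics, but Levy-based metaheuristics commonly draw step lengths via Mantegna's algorithm; the sketch below illustrates that generic component only (the PLBA's actual patch/Levy scheme is defined in the paper, not here):

    ```python
    import math
    import numpy as np

    def levy_step(beta=1.5, size=1, rng=None):
        """Draw heavy-tailed step lengths via Mantegna's algorithm.

        Generic illustration of the Levy-flight component used by many
        Levy-based metaheuristics; not the PLBA's own operator.
        """
        rng = rng or np.random.default_rng(0)
        sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma_u, size)   # numerator sample
        v = rng.normal(0.0, 1.0, size)       # denominator sample
        return u / np.abs(v) ** (1 / beta)   # occasional long jumps, many short ones

    steps = levy_step(size=1000)
    ```

    The occasional long jumps such a distribution produces are what let a bee occasionally leave a stale patch, which is useful when the optimum moves over time.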

  17. ITO-based evolutionary algorithm to solve traveling salesman problem

    Science.gov (United States)

    Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei

    2014-03-01

    In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have been successfully applied to TSP, but as one of them, ITO still needs further demonstration on TSP. Starting from the design of the key operators, which include the move operator and the wave operator, a method based on ITO for TSP is presented; moreover, the performance of the ITO algorithm under different parameter sets and the maintenance of population diversity information are also studied.
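    The paper defines its own move and wave operators, which the abstract does not detail. As a stand-in, the sketch below shows the classic 2-opt segment reversal, the kind of tour-move operator that TSP metaheuristics of this family typically build on (the toy distance matrix is hypothetical):

    ```python
    def tour_length(tour, dist):
        """Total length of a closed tour under a distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt_move(tour, i, k):
        """Reverse the segment tour[i:k+1] -- a classic TSP move operator.
        Illustrative only: the ITO paper defines its own move/wave operators."""
        return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]

    # Toy symmetric 4-city instance (hypothetical distances)
    dist = [[0, 1, 9, 1],
            [1, 0, 1, 9],
            [9, 1, 0, 1],
            [1, 9, 1, 0]]
    tour = [0, 2, 1, 3]                  # a deliberately bad tour, length 20
    better = two_opt_move(tour, 1, 2)    # reversing [2, 1] yields [0, 1, 2, 3]
    ```

    A stochastic-process-driven search then decides which such moves to accept at each step.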

  18. Application layer multicast routing solution based on genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Peng CHENG; Qiufeng WU; Qionghai DAI

    2009-01-01

    Application layer multicast routing is a multi-objective optimization problem. Three routing constraints - the tree's cost, the tree's balance, and the network layer load distribution - are analyzed in this paper. Three fitness functions are used to evaluate a multicast tree on the three indexes respectively, and one general fitness function is generated from them. A novel approach based on genetic algorithms is proposed. Numerical simulations show that, compared with geometrical routing rules, the proposed algorithm improves all three indexes, especially the cost and network layer load distribution indexes.
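    The abstract describes collapsing three per-index fitness values into one general fitness. A minimal sketch of that aggregation is below; the weights and the linear combination are illustrative assumptions, since the abstract does not state the paper's actual combination rule:

    ```python
    def general_fitness(f_cost, f_balance, f_load, weights=(0.5, 0.25, 0.25)):
        """Combine the three per-index fitness values into one general fitness.
        The weighted sum and the weights are illustrative, not the paper's rule."""
        w1, w2, w3 = weights
        return w1 * f_cost + w2 * f_balance + w3 * f_load

    # Compare two candidate multicast trees (per-index fitness values are
    # hypothetical; higher = better on each index).
    tree_a = general_fitness(0.8, 0.6, 0.7)
    tree_b = general_fitness(0.6, 0.9, 0.5)
    ```

    The genetic algorithm would then select, cross over, and mutate trees according to this scalar fitness.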

  19. Digital Watermarking Algorithm Based on Wavelet Transform and Neural Network

    Institute of Scientific and Technical Information of China (English)

    WANG Zhenfei; ZHAI Guangqun; WANG Nengchao

    2006-01-01

    An effective blind digital watermarking algorithm based on neural networks in the wavelet domain is presented. Firstly, the host image is decomposed through the wavelet transform. Significant wavelet coefficients are selected according to human visual system (HVS) characteristics, and watermark bits are added to them. A neural network is then trained to learn the characteristics of the embedded watermark in relation to these coefficients. Because of the learning and adaptive capabilities of neural networks, the trained network can recover the watermark from the watermarked image almost exactly. Experimental results and comparisons with other techniques prove the effectiveness of the new algorithm.

  20. Image fusion based on expectation maximization algorithm and steerable pyramid

    Institute of Scientific and Technical Information of China (English)

    Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung

    2004-01-01

    In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and steerable pyramid is proposed. The registered images are first decomposed by using steerable pyramid. The EM algorithm is used to fuse the image components in the low frequency band. The selection method involving the informative importance measure is applied to those in the high frequency band. The final fused image is then computed by taking the inverse transform on the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.

  1. Community Structure Detection Algorithm Based on the Node Belonging Degree

    Directory of Open Access Journals (Sweden)

    Jian Li

    2013-07-01

    Full Text Available In this paper, we propose a novel algorithm to identify communities in complex networks based on the node belonging degree. First, we give the concept of the node belonging degree, and then determine whether a node belongs to a community according to the belonging degree of the node with respect to that community. Experimental results on three real-world networks - a 19-node network with three communities, the Zachary Karate Club, and the network of American college football teams - show that the proposed algorithm detects community structure satisfactorily.
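    One natural reading of a node's belonging degree is the fraction of its edges that stay inside a candidate community; the sketch below uses that definition, which is an assumption since the abstract does not give the paper's exact formula:

    ```python
    def belonging_degree(node, community, adjacency):
        """Fraction of a node's edges that stay inside the community.

        An assumed intra-edge-ratio definition; the paper's precise
        belonging-degree formula may differ.
        """
        neighbors = adjacency[node]
        if not neighbors:
            return 0.0
        inside = sum(1 for n in neighbors if n in community)
        return inside / len(neighbors)

    # Tiny toy graph: two triangles (0-1-2 and 3-4-5) joined by edge 2-3.
    adjacency = {
        0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
        3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
    }
    community = {0, 1, 2}
    d = belonging_degree(2, community, adjacency)  # 2 of node 2's 3 edges are internal
    ```

    A node would then be assigned to the community for which this ratio is highest (or above a threshold).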

  2. Matrix-based, finite-difference algorithms for computational acoustics

    Science.gov (United States)

    Davis, Sanford

    1990-01-01

    A compact numerical algorithm is introduced for simulating multidimensional acoustic waves. The algorithm is expressed in terms of a set of matrix coefficients on a three-point spatial grid that approximates the acoustic wave equation with a discretization error of O(h^5). The method is based on tracking a local phase variable and its implementation suggests a convenient coordinate splitting along with natural intermediate boundary conditions. Results are presented for oblique plane waves and compared with other procedures. Preliminary computations of acoustic diffraction are also considered.

  3. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  4. QRS Detection Based on an Advanced Multilevel Algorithm

    Directory of Open Access Journals (Sweden)

    Wissam Jenkal

    2016-01-01

    Full Text Available This paper presents an advanced multilevel algorithm used for QRS complex detection. The method is based on three levels. The first permits the extraction of the higher peaks using an adaptive thresholding technique. The second allows the QRS region detection. The last level permits the detection of the Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system for a real time ECG monitoring system.
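    The first level - picking the higher peaks with an adaptive threshold - can be sketched as below. Only that level is illustrated; the paper's QRS-region detection and Q/R/S delineation levels are not reproduced, and the window size and threshold factor are illustrative choices:

    ```python
    import numpy as np

    def detect_r_peaks(ecg, fs, win_s=2.0, k=0.6):
        """Level-one sketch: adaptive-threshold peak picking.

        The threshold adapts to each window's maximum (factor k is an
        illustrative choice, not the paper's parameter).
        """
        win = int(win_s * fs)
        peaks = []
        for start in range(0, len(ecg), win):
            seg = ecg[start:start + win]
            thr = k * seg.max()  # per-window adaptive threshold
            for i in range(1, len(seg) - 1):
                if seg[i] > thr and seg[i] >= seg[i - 1] and seg[i] > seg[i + 1]:
                    peaks.append(start + i)
        return peaks

    # Synthetic "ECG": low-amplitude noise plus sharp spikes, fs = 250 Hz
    fs = 250
    rng = np.random.default_rng(1)
    ecg = 0.05 * rng.standard_normal(fs * 4)
    ecg[[125, 375, 625, 875]] += 1.0
    peaks = detect_r_peaks(ecg, fs)
    ```

    On real ECG the subsequent levels refine each detected region into its Q, R and S waves.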

  5. An Optimal Seed Based Compression Algorithm for DNA Sequences

    Directory of Open Access Journals (Sweden)

    Pamela Vinitha Eric

    2016-01-01

    Full Text Available This paper proposes a seed based lossless compression algorithm to compress a DNA sequence which uses a substitution method that is similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than the existing lossless DNA sequence compression algorithms.

  6. A Sumudu based algorithm for solving differential equations

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2007-11-01

    Full Text Available An algorithm based on the Sumudu transform is developed. The algorithm can be implemented in computer algebra systems like Maple. It can be used to solve differential equations of the following form automatically, without human interaction: \begin{displaymath} \sum_{i=0}^{m} p_i(x)\, y^{(i)}(x) = \sum_{j=0}^{k} q_j(x)\, h_j(x) \end{displaymath} where p_i(x) (i = 0, 1, 2, ..., m) and q_j(x) (j = 0, 1, 2, ..., k) are polynomials; the h_j(x) are non-rational functions whose Sumudu transforms are rational; and m, k are nonnegative integers.
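    The transform itself can be written S[f](u) = ∫₀^∞ f(ut) e^{-t} dt. A small SymPy sketch of computing it is below; the paper's actual implementation targets Maple, so this is only an illustration of the underlying transform:

    ```python
    import sympy as sp

    t, u = sp.symbols('t u', positive=True)

    def sumudu(f_t):
        """Sumudu transform S[f](u) = Integral_0^oo f(u*t)*exp(-t) dt.

        SymPy sketch of the transform the algorithm relies on; not the
        paper's Maple code.
        """
        return sp.integrate(f_t.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo))

    S_t = sumudu(t)          # S[t] = u
    S_t2 = sumudu(t**2)      # S[t^2] = 2*u**2
    ```

    The key property exploited by the algorithm is that, for the admissible h_j, these transforms are rational in u, so the transformed equation can be solved algebraically and inverted.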

  7. Analog Group Delay Equalizers Design Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    M. Laipert

    2006-04-01

    Full Text Available This paper deals with a design method for an analog all-pass filter intended to equalize the group delay frequency response of an analog filter. The method is based on an evolutionary algorithm, the Differential Evolution algorithm in particular. We are able to design equalizers that achieve an equal-ripple group delay frequency response in the pass-band of the low-pass filter. The procedure works automatically without an initial estimate. The method is demonstrated on practical examples.

  8. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available Core business is the most important business of an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current situation of the company's development, the core business of the company can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  9. A New Algorithm for Total Variation Based Image Denoising

    Institute of Scientific and Technical Information of China (English)

    Yi-ping XU

    2012-01-01

    We propose a new algorithm for the total variation based image denoising problem. The split Bregman method is used to convert the unconstrained minimization denoising problem into a linear system in the outer iteration. An algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with a large signal-to-noise ratio.
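    The objective being minimized is the ROF functional 0.5‖x − y‖² + λ·TV(x). The sketch below minimizes a smoothed version of it on a 1D signal by plain gradient descent; it is only a minimal stand-in, since the paper's solver uses split Bregman outer iterations with an algebraic multigrid inner solver and Krylov acceleration, none of which is reproduced here:

    ```python
    import numpy as np

    def tv(x):
        """Discrete total variation of a 1D signal."""
        return np.sum(np.abs(np.diff(x)))

    def tv_denoise_1d(y, lam=0.5, step=0.05, iters=500, eps=1e-2):
        """Gradient descent on the smoothed ROF objective
        0.5*||x - y||^2 + lam * sum sqrt(diff(x)^2 + eps).
        Step size, smoothing eps, and lam are illustrative choices."""
        x = y.copy()
        for _ in range(iters):
            d = np.diff(x)
            w = d / np.sqrt(d * d + eps)   # smoothed subgradient of |d|
            g_tv = np.zeros_like(x)
            g_tv[:-1] -= w
            g_tv[1:] += w
            x -= step * ((x - y) + lam * g_tv)
        return x

    rng = np.random.default_rng(0)
    signal = np.concatenate([np.zeros(50), np.ones(50)])  # clean step edge
    noisy = signal + 0.2 * rng.standard_normal(100)
    denoised = tv_denoise_1d(noisy)
    ```

    TV regularization suppresses the noise oscillations while largely preserving the step edge, which is exactly why the (much faster) split Bregman/multigrid machinery of the paper is worth building.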

  10. Algorithms for Quantum Branching Programs Based on Fingerprinting

    CERN Document Server

    Ablayev, Farid; 10.4204/EPTCS.9.1

    2009-01-01

    In the paper we develop a method for constructing quantum algorithms for computing Boolean functions by quantum ordered read-once branching programs (quantum OBDDs). Our method is based on the fingerprinting technique and the representation of Boolean functions by their characteristic polynomials. We use circuit notation for branching programs to present the desired algorithms. For several known functions our approach provides optimal QOBDDs. Namely, we consider such functions as Equality, Palindrome, and Permutation Matrix Test. We also propose a generalization of our method and apply it to the Boolean variant of the Hidden Subgroup Problem.

  11. Stellar Population Analysis of Galaxies based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    Abdel-Fattah Attia; H.A.Ismail; I.M.Selim; A.M.Osman; I.A.Isaa; M.A.Marie; A.A.Shaker

    2005-01-01

    We present a new method for determining the age and relative contribution of different stellar populations in galaxies based on the genetic algorithm. We apply this method to the barred spiral galaxy NGC 3384, using CCD images in the U, B, V, R and I bands. This analysis indicates that the galaxy NGC 3384 is mainly inhabited by an old stellar population (age > 10^9 yr). Some problems were encountered when numerical simulations were used for determining the contribution of different stellar populations to the integrated color of a galaxy. The results show that the proposed genetic algorithm can search efficiently through the very large space of possible ages.

  12. Web mining based on chaotic social evolutionary programming algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Since the K-means clustering algorithm usually ends in local optimization and can hardly reach global optimization, a new web clustering method is presented based on the chaotic social evolutionary programming (CSEP) algorithm. In this method, a cognitive agent inherits a clustering paradigm, which enables it to acquire a chaotic mutation operator during betrayal. As proven in the experiment, this method can not only effectively increase web clustering efficiency, but also practically improve the precision of web clustering.

  13. A SAR Image Registration Method Based on SIFT Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  14. Design and Implementation of GPU-Based Prim's Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2011-07-01

    Full Text Available Minimum spanning tree is a classical problem in graph theory that plays a key role in a broad domain of applications. This paper proposes a minimum spanning tree algorithm using Prim's approach on Nvidia GPU under CUDA architecture. By using new developed GPU-based Min-Reduction data parallel primitive in the key step of the algorithm, higher efficiency is achieved. Experimental results show that we obtain about 2 times speedup on Nvidia GTX260 GPU over the CPU implementation and 3 times speedup over non-primitives GPU implementation.
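    For reference, the serial CPU baseline that such a GPU implementation parallelizes is ordinary Prim's algorithm; in the paper's formulation, the Min-Reduction primitive takes over the minimum-selection step that a priority queue performs below. A minimal sketch (toy graph is hypothetical):

    ```python
    import heapq

    def prim_mst_weight(n, adj):
        """Serial Prim's algorithm: grow the MST from vertex 0, always adding
        the minimum-weight edge leaving the tree.

        adj: {u: [(weight, v), ...]} for an undirected graph on n vertices.
        """
        visited = [False] * n
        heap = [(0, 0)]          # (edge weight, vertex), start at vertex 0
        total = 0
        while heap:
            w, u = heapq.heappop(heap)
            if visited[u]:
                continue
            visited[u] = True
            total += w
            for wv, v in adj[u]:
                if not visited[v]:
                    heapq.heappush(heap, (wv, v))
        return total

    # Small example: MST uses edges 0-1 (1), 1-2 (2), 2-3 (3)
    adj = {
        0: [(1, 1), (4, 2)],
        1: [(1, 0), (2, 2), (6, 3)],
        2: [(4, 0), (2, 1), (3, 3)],
        3: [(6, 1), (3, 2)],
    }
    mst_weight = prim_mst_weight(4, adj)
    ```

    The minimum-edge selection inside the loop is the data-parallel hotspot: replacing it with a GPU min-reduction over all frontier edges is what yields the reported speedup.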

  15. CCH-based geometric algorithms for SVM and applications

    Institute of Scientific and Technical Information of China (English)

    Xin-jun PENG; Yi-fei WANG

    2009-01-01

    The support vector machine (SVM) is a novel machine learning tool in data mining. In this paper, the geometric approach based on the compressed convex hull (CCH) with a mathematical framework is introduced to solve SVM classification problems. Compared with the reduced convex hull (RCH), CCH preserves the shape of geometric solids for data sets; meanwhile, it is easy to give the necessary and sufficient condition for determining its extreme points. As practical applications of CCH, sparse and probabilistic speed-up geometric algorithms are developed. Results of numerical experiments show that the proposed algorithms can reduce kernel calculations and show good performance.

  16. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication which are necessary to correctly implement AES[1]. This method can be applied on processors with word length 32 or above, on FPGAs, and others. Correspondingly, it can be implemented in VHDL, Verilog, VB and other languages.
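    The GF(2^8) arithmetic that such lookup tables precompute can be sketched directly. The sketch below is a plain shift-and-add multiplier under the AES reduction polynomial x⁸ + x⁴ + x³ + x + 1 (0x11B), checked against the worked example {57}·{83} = {C1} from the AES specification; it illustrates the arithmetic only, not the paper's five-table construction:

    ```python
    def gf256_mul(a, b):
        """Multiply two bytes in GF(2^8) with the AES reduction polynomial
        x^8 + x^4 + x^3 + x + 1 -- the arithmetic the lookup tables precompute."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a            # add (XOR) current multiple of a
            hi = a & 0x80
            a = (a << 1) & 0xFF   # multiply a by x
            if hi:
                a ^= 0x1B         # reduce modulo x^8 + x^4 + x^3 + x + 1
            b >>= 1
        return p

    # Worked example from the AES specification: {57} * {83} = {C1}
    product = gf256_mul(0x57, 0x83)
    ```

    A table-based implementation trades these eight iterations for a single indexed load, which is where the speedup on 32-bit processors comes from.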

  17. Optimizing Combination of Units Commitment Based on Improved Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    LAI Yifei; ZHANG Qianhua; JIA Junping

    2007-01-01

    GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. By improving the coding scheme, dynamically changing the mutation rate and the crossover probability, dynamically choosing survivors, and reserving the individual with the optimal fitness value, a modified genetic algorithm for optimizing the combination of units in thermal power plants is proposed. Through worked examples, test results are analyzed and compared with the results of several other algorithms. Numerical results show the value of the method for the unit commitment problem on these examples.

  18. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  19. An assessment of algorithms to estimate respiratory rate from the electrocardiogram and photoplethysmogram.

    Science.gov (United States)

    Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J

    2016-04-01

    Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs the best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with IP, the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study are publicly available.

  20. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA, considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA and a particle swarm optimization (PSO, and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  1. An ellipse detection algorithm based on edge classification

    Science.gov (United States)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point sequences and applying a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by an optimization of the RED algorithm, which uses the Euclidean distance to measure the distance from an edge point to the elliptical boundary. Experimental results show that the method detects ellipses well even when edges suffer interference or block each other, and that it has higher detection precision and lower time consumption than the RED algorithm.

  2. A Parsing Graph-based Algorithm for Ontology Mapping

    Institute of Scientific and Technical Information of China (English)

    WANG Zong-jiang; WANG Ying-lin; ZHANG Shen-sheng; DU Tao

    2009-01-01

    Ontology mapping is a critical problem for integrating heterogeneous information sources: it identifies the elements corresponding to each other. At present, there are many ontology mapping algorithms, but most of them are based on database schemas. After analyzing the similarities and differences between ontologies and schemas, we propose a parsing graph-based algorithm for ontology mapping. The ontology parsing graph (OP-graph) extends the general concept of a graph, encoding the logic relationships and semantic information that the ontology contains into the vertices and edges of the graph. Thus, the problem of ontology mapping is translated into the problem of finding the optimal match between two OP-graphs. With the definition of a universal measure for comparing the entities of two ontologies, we calculate the whole similarity between the two OP-graphs iteratively, until the optimal match is found. The results of experiments show that our algorithm is promising.

  3. Neighborhood based Levenberg-Marquardt algorithm for neural network training.

    Science.gov (United States)

    Lera, G; Pinzolas, M

    2002-01-01

    Although the Levenberg-Marquardt (LM) algorithm has been extensively applied as a neural-network training method, it suffers from being very expensive, both in memory and number of operations required, when the network to be trained has a significant number of adaptive weights. In this paper, the behavior of a recently proposed variation of this algorithm is studied. This new method is based on the application of the concept of neural neighborhoods to the LM algorithm. It is shown that, by performing an LM step on a single neighborhood at each training iteration, not only significant savings in memory occupation and computing effort are obtained, but also, the overall performance of the LM method can be increased.
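    The core LM update is Δw = (JᵀJ + μI)⁻¹ Jᵀr; the neighborhood idea restricts it to a subset of the weights, shrinking the JᵀJ system. The sketch below shows one such restricted step on a toy least-squares problem; the subset selection and damping value are illustrative, not the paper's neighborhood scheme:

    ```python
    import numpy as np

    def lm_step(params, residual_fn, jac_fn, idx, mu=1e-2):
        """One Levenberg-Marquardt step restricted to the parameter subset idx,
        a rough analogue of updating one 'neighborhood' per iteration: only
        the selected Jacobian columns enter the J^T J system."""
        r = residual_fn(params)
        J = jac_fn(params)[:, idx]                 # Jacobian columns of the subset
        A = J.T @ J + mu * np.eye(len(idx))        # damped normal equations
        delta = np.linalg.solve(A, J.T @ r)
        new = params.copy()
        new[idx] -= delta
        return new

    # Toy problem: fit y = a*x + b by least squares
    x = np.linspace(0, 1, 20)
    y = 2.0 * x + 1.0
    residual = lambda p: p[0] * x + p[1] - y
    jacobian = lambda p: np.column_stack([x, np.ones_like(x)])

    p0 = np.array([0.0, 0.0])
    p1 = lm_step(p0, residual, jacobian, idx=[0, 1])
    ```

    With the full index set this is ordinary LM; restricting idx to a neighborhood of weights is what trades per-step progress for a much smaller linear system, as the abstract describes.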

  4. PSO Algorithm Based on Accumulation Effect and Mutation

    Directory of Open Access Journals (Sweden)

    Ji Wei Dong

    2013-07-01

    Full Text Available Particle Swarm Optimization (PSO) is a new swarm intelligence optimization technique; because of its simplicity, few parameters and good effectiveness, PSO has been widely used to solve various complex optimization problems. However, PSO suffers from premature and local convergence. We propose an improved particle swarm optimization based on an aggregation effect and a mutation operator, which determines whether aggregation occurs during the search; if it does, a Gaussian mutation is applied to the global extremum, so that the algorithm overcomes the defect of falling into locally optimal solutions. Testing the new algorithm on typical test functions shows that, compared with the conventional genetic algorithm (SGA), it improves the ability of global optimization and effectively avoids premature convergence.
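    The aggregation-plus-mutation idea can be sketched as standard PSO with a swarm-collapse check that perturbs the global best. Everything quantitative below (aggregation measure, mutation scale, inertia and acceleration coefficients) is an illustrative assumption, not the paper's parameterization:

    ```python
    import numpy as np

    def pso_gaussian(f, dim=2, n=20, iters=100, agg_eps=1e-3, seed=0):
        """PSO with an aggregation check and Gaussian mutation of the global
        best. The std-based aggregation measure and mutation scale are
        illustrative choices."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        pbest_f = np.array([f(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = x + v
            fx = np.array([f(p) for p in x])
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            g = pbest[pbest_f.argmin()].copy()
            # aggregation check: if the swarm has collapsed, mutate the global best
            if x.std() < agg_eps:
                cand = g + rng.normal(0.0, 0.1, dim)
                if f(cand) < f(g):
                    g = cand
        return g, f(g)

    best, best_f = pso_gaussian(lambda p: float(np.sum(p * p)))
    ```

    On a multimodal function the mutation step gives a collapsed swarm a chance to escape, which is the mechanism the abstract credits for avoiding premature convergence.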

  5. Applied RCM2 Algorithms Based on Statistical Methods

    Institute of Scientific and Technical Information of China (English)

    Fausto Pedro García Márquez; Diego J. Pedregal

    2007-01-01

    The main purpose of this paper is to implement a system capable of detecting faults in railway point mechanisms. This is achieved by developing an algorithm that takes advantage of three empirical criteria simultaneously capable of detecting faults from records of measurements of force against time. The system is dynamic in several respects: the base reference data is computed using all the curves free from faults as they are encountered in the experimental data; the algorithm that uses the three criteria simultaneously may be applied in on-line situations as each new data point becomes available; and recursive algorithms are applied to filter noise from the raw data in an automatic way. Encouraging results are found in practice when the system is applied to a number of experiments carried out by an industrial sponsor.

  6. Chaos-Based Image Encryption Algorithm Using Decomposition

    Directory of Open Access Journals (Sweden)

    Xiuli Song

    2013-07-01

    Full Text Available The proposed chaos-based image encryption algorithm consists of four stages: decomposition, shuffle, diffusion and combination. Decomposition is that an original image is decomposed to components according to some rule. The purpose of the shuffle is to mask original organization of the pixels of the image, and the diffusion is to change their values. Combination is not necessary in the sender. To improve the efficiency, the parallel architecture is taken to process the shuffle and diffusion. To enhance the security of the algorithm, firstly, a permutation of the labels is designed. Secondly, two Logistic maps are used in diffusion stage to encrypt the components. One map encrypts the odd rows of the component and another map encrypts the even rows. Experiment results and security analysis demonstrate that the encryption algorithm not only is robust and flexible, but also can withstand common attacks such as statistical attacks and differential attacks.
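    The diffusion stage's use of Logistic maps can be illustrated with a toy byte-stream cipher: iterate x ← r·x·(1 − x), quantize to bytes, and XOR. This is only the chaotic-sequence idea in miniature; the paper's scheme additionally uses decomposition, a permutation of labels, and two separate maps over odd and even rows, none of which is shown, and the map parameters here are arbitrary:

    ```python
    def logistic_keystream(n, x0=0.3456, r=3.99):
        """Byte keystream from the Logistic map x <- r*x*(1-x).
        x0 and r are illustrative key values; x stays in (0, 1) for r <= 4."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) & 0xFF)
        return bytes(out)

    def xor_crypt(data, key_x0=0.3456):
        """XOR data with the chaotic keystream; applying it twice decrypts."""
        ks = logistic_keystream(len(data), x0=key_x0)
        return bytes(a ^ b for a, b in zip(data, ks))

    plain = b"chaotic image row"
    cipher = xor_crypt(plain)
    recovered = xor_crypt(cipher)   # same keystream, so XOR undoes itself
    ```

    Sensitivity to x0 is what makes the map useful as a key: a slightly different initial value yields an entirely different keystream.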

  7. Chaos-Based Encryption Algorithm for Compressed Video

    Institute of Scientific and Technical Information of China (English)

    袁春; 钟玉琢; 贺玉文

    2003-01-01

    Encryption for compressed video streams has attracted increasing attention with the exponential growth of digital multimedia delivery and consumption. However, most algorithms proposed in the literature do not effectively address the peculiarities of security and performance requirements. This paper presents a chaos-based encryption algorithm called the chaotic selective encryption of compressed video (CSECV) which exploits the characteristics of the compressed video. The encryption has three separate layers that can be selected according to the security needs of the application and the processing capability of the client computer. The chaotic pseudo-random sequence generator used to generate the key-sequence to randomize the important fields in the compressed video stream has its parameters encrypted by an asymmetric cipher and placed into the stream. The resulting stream is still a valid video stream. CSECV has significant advantages over existing algorithms for security, decryption speed, implementation flexibility, and error preservation.

  8. Design of synthetic biological logic circuits based on evolutionary algorithm.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling technology has been verified, showing its great advantages for this purpose.

  9. An Algorithm of Sensor Management Based on Dynamic Target Detection

    Institute of Scientific and Technical Information of China (English)

    LIUXianxing; ZHOULin; JINYong

    2005-01-01

    The probability density of a stationary target evolves only at measurement updates, whereas the probability density of a dynamic target evolves both at measurement updates and between measurements; accordingly, this paper investigates an algorithm for dynamic target detection. First, it presents the evolution of the probability density at measurement updates via Bayes' rule and its evolution between measurements via the Fokker-Planck differential equation, respectively. Second, a method of obtaining information entropy from the probability density is given, and sensor resources are allocated based on the evolution of information entropy, i.e., the maximization of information gain. Simulation results show that, compared with a serial-search algorithm, this algorithm is feasible and effective for detecting dynamic targets.
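
    The entropy-based allocation step can be illustrated with a discretised toy example, assuming a four-cell search grid and a made-up detector likelihood; the paper itself evolves continuous densities via Bayes' rule and the Fokker-Planck equation, which this sketch does not reproduce.

    ```python
    import math

    def entropy(p):
        """Shannon entropy (bits) of a discrete distribution."""
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def bayes_update(p, likelihood):
        """Posterior over cells after one measurement (Bayes' rule)."""
        post = [pi * li for pi, li in zip(p, likelihood)]
        z = sum(post)
        return [pi / z for pi in post]

    # Uniform prior over 4 cells; a sensor look at cell 2 with a hypothetical
    # detector response (high likelihood where the target actually is).
    prior = [0.25, 0.25, 0.25, 0.25]
    likelihood = [0.1, 0.1, 0.9, 0.1]
    posterior = bayes_update(prior, likelihood)

    # Information gain = entropy reduction; the sensor manager would assign
    # the next look to whichever cell maximizes this quantity.
    gain = entropy(prior) - entropy(posterior)
    ```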

  10. Method of stereo matching based on genetic algorithm

    Science.gov (United States)

    Lu, Chaohui; An, Ping; Zhang, Zhaoyang

    2003-09-01

    A new stereo matching scheme based on image edges and a genetic algorithm (GA) is presented in this paper to improve the conventional stereo matching method. In order to extract robust edge features for stereo matching, an infinite symmetric exponential filter (ISEF) is first applied to remove image noise, and a nonlinear Laplace operator together with the local variance of intensity is then used to detect edges. Besides the detected edges, the polarity of edge pixels is also obtained. As an efficient search method, the genetic algorithm is applied to find the best matching pairs, and some new ideas are developed for applying genetic algorithms to stereo matching. Experimental results show that the proposed methods are effective and obtain good results.

  11. Community Clustering Algorithm in Complex Networks Based on Microcommunity Fusion

    Directory of Open Access Journals (Sweden)

    Jin Qi

    2015-01-01

    Full Text Available With further research in recent years on the physical meaning and digital features of community structure in complex networks, improving the effectiveness and efficiency of community mining algorithms in complex networks has become an important subject in this area. This paper puts forward the concept of the microcommunity and obtains the final community mining results by fusing different microcommunities. It starts from the basic definition of a network community and applies Expansion to microcommunity clustering, which provides the prerequisites for microcommunity fusion. Analysis of test results on a network data set shows that the proposed algorithm is more efficient and achieves higher solution quality than other similar algorithms.

  12. Impulsive Neural Networks Algorithm Based on the Artificial Genome Model

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-05-01

    Full Text Available To describe gene regulatory networks, this article takes the framework of the artificial genome model and proposes an impulsive neural networks algorithm based on it. First, gene expression and the cell division tree are applied to generate spiking neurons with specific attributes, the neural network structure, connection weights, and the specific learning rules of each neuron. Next, the gene segment duplication and divergence model is applied to design the evolutionary algorithm for impulsive neural networks at the level of the artificial genome; the dynamic changes of the developmental gene regulatory networks are controlled during the whole evolutionary process. Finally, the nerve-driven food-collecting behavior of an autonomous intelligent agent is simulated. Experimental results demonstrate that the algorithm in this article has the ability to evolve large-scale impulsive neural networks.

  13. Crime Busting Model Based on Dynamic Ranking Algorithms

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2013-01-01

    Full Text Available This paper proposes a crime busting model with two dynamic ranking algorithms to detect the likelihood that an individual is a suspect and the possibility that one is a leader in a complex social network. Notably, in order to obtain a priority list of suspects, an advanced network mining approach with a dynamic cumulative nominating algorithm is adopted, which is computationally much less expensive than most other topology-based approaches. Our method also greatly increases the accuracy of the solution through semantic learning filtering. Moreover, another dynamic algorithm based on node contraction is presented to help identify the leader among conspirators. Test results are given to verify the theoretical results, and they show good performance on both small and large datasets.

  14. An improved EZBC algorithm based on block bit length

    Science.gov (United States)

    Wang, Renlong; Ruan, Shuangchen; Liu, Chengxiang; Wang, Wenda; Zhang, Li

    2011-12-01

    The Embedded ZeroBlock Coding and context modeling (EZBC) algorithm has high compression performance. However, it consumes a large amount of memory because an amplitude quadtree of wavelet coefficients and two other linked lists are built during the encoding process. This is one of the big challenges to using EZBC in real-time or hardware applications. An improved EZBC algorithm based on the bit length of coefficients is brought forward in this article. It uses a Bit Length Quadtree to complete the coding process and output the context for the arithmetic coder. It achieves the same compression performance as EZBC while saving more than 75% of the memory required in the encoding process. As the Bit Length Quadtree can quickly locate wavelet coefficients and judge their significance, the improved algorithm can dramatically accelerate the encoding speed. These improvements are also beneficial for hardware implementation. PACS: 42.30.Va, 42.30.Wb
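
    A minimal sketch of the bit-length idea, assuming a toy 4x4 coefficient block and a nested-dict layout (the article's actual Bit Length Quadtree representation and its arithmetic-coder context output are not specified here): each node stores the bit length of the largest coefficient magnitude in its block, so significance against a given bit plane can be judged without visiting every coefficient.

    ```python
    def bit_length_quadtree(block):
        """Build a quadtree where each node records the max-magnitude bit length."""
        n = len(block)
        top = max(abs(v) for row in block for v in row).bit_length()
        if n == 1:
            return {"size": 1, "bits": top}
        h = n // 2
        # Split the square block into its four quadrants (TL, TR, BL, BR).
        quads = [[row[c:c + h] for row in block[r:r + h]]
                 for r in (0, h) for c in (0, h)]
        return {"size": n, "bits": top,
                "children": [bit_length_quadtree(q) for q in quads]}

    coeffs = [[37, 2, 0, 1],
              [5, 1, 3, 0],
              [0, 2, 6, 1],
              [1, 0, 2, 4]]
    tree = bit_length_quadtree(coeffs)
    ```

    A block whose stored bit length is below the current bit plane is insignificant as a whole, which is what lets the coder skip it cheaply.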

  15. Pitch Based Wind Turbine Intelligent Speed Setpoint Adjustment Algorithms

    Directory of Open Access Journals (Sweden)

    Asier González-González

    2014-06-01

    Full Text Available This work is aimed at optimizing the wind turbine rotor speed setpoint algorithm. Several intelligent adjustment strategies have been investigated in order to improve a reward function that takes into account the power captured from the wind and the turbine speed error. After different approaches, including Reinforcement Learning, the best results were obtained using a Particle Swarm Optimization (PSO)-based wind turbine speed setpoint algorithm. A reward improvement of up to 10.67% has been achieved using PSO compared to a constant approach, and 0.48% compared to a conventional approach. We conclude that the pitch angle is the most adequate input variable for the turbine speed setpoint algorithm, compared to others such as rotor speed or rotor angular acceleration.
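
    A minimal PSO loop of the kind used for setpoint adjustment can be sketched as follows, with a hypothetical one-dimensional reward standing in for the paper's power-capture/speed-error reward; the swarm size and coefficients are conventional defaults, not the paper's settings.

    ```python
    import random

    def reward(x):
        """Toy stand-in reward with its maximum at x = 3."""
        return -(x - 3.0) ** 2

    def pso(n_particles=20, iters=100, lo=-10.0, hi=10.0, w=0.7, c1=1.5, c2=1.5):
        random.seed(42)
        pos = [random.uniform(lo, hi) for _ in range(n_particles)]
        vel = [0.0] * n_particles
        pbest = pos[:]                    # each particle's best-known position
        gbest = max(pos, key=reward)      # swarm-wide best-known position
        for _ in range(iters):
            for i in range(n_particles):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vel[i] = (w * vel[i]
                          + c1 * r1 * (pbest[i] - pos[i])
                          + c2 * r2 * (gbest - pos[i]))
                pos[i] += vel[i]
                if reward(pos[i]) > reward(pbest[i]):
                    pbest[i] = pos[i]
                if reward(pos[i]) > reward(gbest):
                    gbest = pos[i]
        return gbest

    best = pso()
    ```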

  16. CS-based fast ultrasound imaging with improved FISTA algorithm

    Science.gov (United States)

    Lin, Jie; He, Yugao; Shi, Guangming; Han, Tingyu

    2015-08-01

    In an ultrasound imaging system, wave emission and data acquisition are time consuming; this can be addressed by adopting a plane wave as the transmitted signal and applying compressed sensing (CS) theory to data acquisition and image reconstruction. To overcome the very high computational complexity caused by introducing CS into ultrasound imaging, in this paper we propose an improvement of the fast iterative shrinkage-thresholding algorithm (FISTA) to achieve fast reconstruction of the ultrasound image, in which the step-size parameter is modified at each iteration. Further, a GPU strategy is designed for the proposed algorithm to guarantee real-time imaging. Simulation results show that the GPU-based image reconstruction algorithm achieves fast ultrasound imaging without degrading image quality.
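
    The baseline FISTA iteration that the paper modifies (their per-iteration step-size rule and GPU mapping are not shown) can be sketched for the generic l1-regularised least-squares problem min 0.5*||Ax - b||^2 + lam*||x||_1, with a random sensing matrix standing in for the ultrasound measurement model:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t*||.||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(A, b, lam, iters=500):
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(iters):
            grad = A.T @ (A @ y - b)
            x_new = soft_threshold(y - grad / L, lam / L)  # gradient + prox step
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
        return x

    # Toy CS recovery: a 3-sparse signal from 40 random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
    b = A @ x_true
    x_hat = fista(A, b, lam=0.01)
    ```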

  17. A dynamic fuzzy clustering method based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei

    2003-01-01

    A dynamic fuzzy clustering method based on the genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped into their Euclidean distance; that is, the high-dimensional samples are mapped onto the two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distances, to gradually approximate the fuzzy dissimilarities between samples. A key advantage of the proposed method is that the clustering is independent of the space distribution of the input samples, which improves flexibility and visualization. The method converges faster and clusters more exactly than some typical clustering algorithms. Simulated experiments show the feasibility and availability of the proposed method.

  18. A Reversible Image Steganographic Algorithm Based on Slantlet Transform

    Directory of Open Access Journals (Sweden)

    Sushil Kumar

    2013-07-01

    Full Text Available In this paper we present a reversible image steganography technique based on the Slantlet transform (SLT) and the advanced encryption standard (AES) method. The proposed method first encodes the message using two source codes, viz., Huffman codes and a self-synchronizing variable-length code known as T-code. Next, the encoded binary string is encrypted using an improved AES method. The encrypted data so obtained are embedded in the middle and high frequency sub-bands, obtained by applying a 2-level SLT to the cover image, using a thresholding method. The proposed algorithm is compared with existing techniques based on the wavelet transform. The experimental results show that the proposed algorithm can extract the hidden message and recover the original cover image with low distortion. The proposed algorithm offers acceptable imperceptibility, security (two-layer security) and provides robustness against Gaussian and salt-and-pepper noise attacks.

  19. Ant Colony Based Path Planning Algorithm for Autonomous Robotic Vehicles

    Directory of Open Access Journals (Sweden)

    Yogita Gigras

    2012-11-01

    Full Text Available Autonomous robotic vehicles demand highly efficient algorithms as well as software. Today's advanced computer hardware technology does not provide these types of extensive processing capabilities, so there are still major space and time limitations in the technologies available for autonomous robotic applications. Nowadays, small to miniature mobile robots are required for investigation, surveillance and hazardous material detection in military and industrial applications. But these small robots have limited power capacity as well as limited memory and processing resources. A number of algorithms exist for producing optimal paths under dynamically changing costs. This paper presents a new ant colony based approach that is helpful in solving the path planning problem for autonomous robotic applications. Simulation experiments verified the validity of the algorithm in terms of time.

  20. An algorithm for motif-based network design

    CERN Document Server

    Mäki-Marttunen, Tuomo

    2016-01-01

    A determining property of the structure of a biological network is the distribution of local connectivity patterns, i.e., network motifs. In this work, a method for creating directed, unweighted networks while promoting a certain combination of motifs is presented. This motif-based network algorithm starts with an empty graph and randomly connects the nodes by promoting or discouraging the formation of chosen motifs. The in- or out-degree distribution of the generated networks can be explicitly chosen. The algorithm is shown to perform well in producing networks with high occurrences of the targeted motifs, both those consisting of 3 nodes and those consisting of 4 nodes. Moreover, the algorithm can also be tuned to bring about global network characteristics found in many natural networks, such as small-worldness and modularity.

  1. A Dither Modulation Audio Watermarking Algorithm Based on HAS

    Directory of Open Access Journals (Sweden)

    Yi-bo Huang

    2012-11-01

    Full Text Available In this study, we propose a dither modulation audio watermarking algorithm based on the human auditory system, applying the theory of dither modulation. The algorithm first converts the binary image watermark into a one-dimensional digital sequence and uses the Fibonacci transform to scramble this sequence. The audio is then divided into data segments, a discrete wavelet transform is applied to each segment, and every segment adaptively chooses its quantization step. Finally, the watermark is embedded into the transformed low-frequency coefficients using dither modulation. The watermark is extracted without the original audio, realizing blind extraction. The experimental results show that this algorithm has preferable robustness against attacks such as noise addition, compression, low-pass filtering and re-sampling.
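
    Classic dither modulation (quantization index modulation) on a single coefficient can be sketched as follows; the fixed step size and the sample coefficients are illustrative, and the paper's DWT, Fibonacci scrambling, and adaptive step selection are not reproduced.

    ```python
    def embed(coeff, bit, step=4.0):
        """Quantize a coefficient with one of two dither-shifted quantizers."""
        dither = step / 4.0 if bit else -step / 4.0
        return step * round((coeff - dither) / step) + dither

    def extract(coeff, step=4.0):
        """Decode by choosing whichever quantizer reconstructs more closely."""
        d0, d1 = -step / 4.0, step / 4.0
        e0 = abs(coeff - (step * round((coeff - d0) / step) + d0))
        e1 = abs(coeff - (step * round((coeff - d1) / step) + d1))
        return 1 if e1 < e0 else 0

    coeffs = [10.3, -2.7, 5.1, 0.4]
    watermark = [1, 0, 1, 0]
    marked = [embed(c, b) for c, b in zip(coeffs, watermark)]
    bits = [extract(c) for c in marked]
    ```

    Extraction needs only the step size and dither values, not the original coefficients, which is what makes the scheme blind.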

  2. Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio

    CERN Document Server

    Zeng, Yonghong

    2008-01-01

    Spectrum sensing is a fundamental component in a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other is based on the ratio of the average eigenvalue to the minimum eigenvalue. Using some of the latest random matrix theory (RMT) results, we quantify the distributions of these ratios and derive the probabilities of false alarm and probabilities of detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring knowledge of the signal, channel and noise power. Simulations based ...
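
    The first of the two ratio tests, maximum over minimum eigenvalue of the sample covariance, can be sketched as follows. The array size, snapshot count, and the fully correlated toy signal are assumptions, and the RMT-derived threshold is replaced by a direct comparison of the two statistics.

    ```python
    import numpy as np

    def mme_statistic(samples):
        """Max/min eigenvalue ratio of the sample covariance.

        samples: (num_snapshots, num_antennas) received-signal matrix.
        """
        R = samples.T @ samples.conj() / samples.shape[0]  # sample covariance
        eig = np.linalg.eigvalsh(R)                        # ascending order
        return eig[-1].real / eig[0].real

    rng = np.random.default_rng(1)
    noise = rng.standard_normal((1000, 4))
    # A signal identical across the 4 antennas, i.e. highly correlated.
    signal = np.outer(rng.standard_normal(1000), np.ones(4))

    t_noise = mme_statistic(noise)            # near 1 under noise only (H0)
    t_signal = mme_statistic(noise + signal)  # much larger under signal (H1)
    ```

    Because the statistic is a ratio of eigenvalues, any common scaling of the noise power cancels, which is why the test sidesteps noise uncertainty.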

  3. Quantum-based algorithm for optimizing artificial neural networks.

    Science.gov (United States)

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region-by-region exploration and evolves gradually to find promising subspaces for further exploitation. This is helpful for providing a set of appropriate weights when evolving the network structure and for alleviating the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely the breast cancer, iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.

  4. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Directory of Open Access Journals (Sweden)

    Xiao Sun

    2015-01-01

    Full Text Available Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  5. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  6. A Collaborative Neighbor Representation Based Face Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Zhengming Li

    2013-01-01

    Full Text Available We propose a new collaborative neighbor representation algorithm for face recognition based on a revised regularized reconstruction error (RRRE), called the two-phase collaborative neighbor representation algorithm (TCNR). Specifically, the RRRE is the l2-norm of the reconstruction error of each class divided by a linear combination of the l2-norms of the reconstruction coefficients of each class, which can be used to increase the discriminative information for classification. The algorithm works as follows: in the first phase, the test sample is represented as a linear combination of all the training samples by incorporating neighbor information into the objective function; in the second phase, we use k classes to represent the test sample and calculate the collaborative neighbor representation coefficients. TCNR not only preserves the locality and similarity information of sparse coding but also eliminates the side effect on the classification decision of classes that are far from the test sample. Moreover, the rationale and an alternative scheme of TCNR are given. The experimental results show that the TCNR algorithm achieves better performance than seven previous algorithms.

  7. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert law, the influence of the frequency of the incident light on the transmittance is analyzed, and the ratios between the channels' transmittances are derived. Then, in order to increase efficiency, the input image is resized to a smaller size before acquiring the refined transmittance, which is then resized back to the size of the original image. Finally, all the transmittances are obtained with the help of the proportions between the color channels, and they are used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image in comparison with the original algorithm, meaning that the problem of high color saturation is eliminated. What is more, the improved algorithm is four to nine times faster than the original algorithm.
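
    The per-channel relation the derivation rests on can be checked numerically: under the Beer-Lambert law, t_c = exp(-beta_c * d), so any two channels satisfy the power relation t_g = t_r ** (beta_g / beta_r), and one channel's transmittance fixes the others given the coefficient ratios. The scattering coefficients and depth below are hypothetical values, not ones from the paper.

    ```python
    import math

    # Assumed per-channel attenuation coefficients and scene depth.
    beta = {"r": 0.05, "g": 0.06, "b": 0.07}
    d = 20.0

    # Beer-Lambert transmittance for each channel: t_c = exp(-beta_c * d).
    t = {c: math.exp(-b * d) for c, b in beta.items()}

    # Power relation between channels: t_g = t_r ** (beta_g / beta_r).
    t_g_from_r = t["r"] ** (beta["g"] / beta["r"])
    ```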

  8. Object tracking algorithm based on contextual visual saliency

    Science.gov (United States)

    Fu, Bao; Peng, XianRong

    2016-09-01

    In object tracking, the local context surrounding the target can provide much effective information for building a robust tracker. The spatial-temporal context (STC) learning algorithm proposed recently considers the information of the dense context around the target and has achieved better performance. However, STC uses only image intensity as the object appearance model, which is not enough to deal with complicated tracking scenarios. In this paper, we propose a novel object appearance model learning algorithm. Our approach formulates the spatial-temporal relationships between the object of interest and its local context in a Bayesian framework, which models the statistical correlation between high-level features (Circular-Multi-Block Local Binary Pattern) from the target and its surrounding regions. The tracking problem is posed as computing a visual saliency map, and the best target location is obtained by maximizing an object location likelihood function. Extensive experimental results on public benchmark databases show that our algorithm outperforms the original STC algorithm and other state-of-the-art tracking algorithms.

  9. FACT. New image parameters based on the watershed-algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Linhoff, Lena; Bruegge, Kai Arno; Buss, Jens [TU Dortmund (Germany). Experimentelle Physik 5b; Collaboration: FACT-Collaboration

    2016-07-01

    FACT, the First G-APD Cherenkov Telescope, is the first imaging atmospheric Cherenkov telescope that uses Geiger-mode avalanche photodiodes (G-APDs) as photo sensors. The raw data produced by this telescope are processed in an analysis chain, which leads to a classification of the primary particle that induces a shower and to an estimation of its energy. One important step in this analysis chain is the parameter extraction from shower images. By applying a watershed algorithm to the camera image, new parameters are computed. Perceiving the brightness of a pixel as height, a set of pixels can be seen as a 'landscape' with hills and valleys. A watershed algorithm groups all pixels belonging to the same hill into a cluster. From the emerging segmented image, one can derive new parameters for later analysis steps, e.g. the number of clusters, their shape, and their contained photon charge. For FACT data, the FellWalker algorithm was chosen from the class of watershed algorithms because it was designed to work on discrete distributions, in this case the pixels of a camera image. The FellWalker algorithm is implemented in FACT-tools, which provides the low-level analysis framework for FACT. This talk focuses on the computation of new, FellWalker-based image parameters which can be used for gamma-hadron separation. Additionally, their distributions for real and Monte Carlo data are compared.

  10. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  11. Wideband Array Signal Detection Algorithm Based on Power Focusing

    Directory of Open Access Journals (Sweden)

    Gong Bin

    2012-09-01

    Full Text Available Aiming at the requirement of real-time signal detection in passive surveillance systems, a wideband array signal detection algorithm is proposed based on the concept of power focusing. By making use of the phase differences of the signal received by a uniform linear array, the algorithm focuses the power of the received signal in the Direction Of Arrival (DOA) using an improved cascade FFT. Subsequently, the probability density function of the output noise at each angle is derived. Furthermore, a Constant False Alarm Rate (CFAR) test statistic and the corresponding detection threshold are constructed. The theoretical probability of detection is also derived for different false alarm rates and Signal-to-Noise Ratios (SNR). The proposed algorithm is computationally efficient, and the detection process is independent of prior information. Meanwhile, the results can serve as initial values for other algorithms with higher precision. Simulation results show that the proposed algorithm achieves good performance for weak signal detection.

  12. A Greedy Algorithm for Neighborhood Overlap-Based Community Detection

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2016-01-01

    Full Text Available The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes who are neighbors of both u and v to the number of nodes who are neighbors of at least u or v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another. Accordingly, we propose a greedy algorithm that iteratively removes the edges of a network in increasing order of their neighborhood overlap and calculates the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) that have the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare it against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.
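
    The NOVER score of an edge can be computed directly from adjacency sets. The toy graph below (two triangles bridged by the edge 2-3) is illustrative, not one of the paper's nine networks, and excluding u and v themselves from the denominator is an assumption about the exact definition.

    ```python
    def nover(adj, u, v):
        """Neighborhood overlap of edge u-v: |N(u) & N(v)| / |(N(u) | N(v)) - {u, v}|."""
        common = adj[u] & adj[v]
        union = (adj[u] | adj[v]) - {u, v}
        return len(common) / len(union) if union else 0.0

    # Two triangles (0-1-2 and 3-4-5) bridged by the single edge 2-3.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    bridge = nover(adj, 2, 3)    # low score: the edge bridges two communities
    internal = nover(adj, 0, 1)  # high score: the edge lies inside a triangle
    ```

    The greedy algorithm in the abstract would therefore remove edge 2-3 first, splitting the graph into its two natural communities.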

  13. User Equilibrium Exchange Allocation Algorithm Based on Super Network

    Directory of Open Access Journals (Sweden)

    Peiyi Dong

    2013-12-01

    Full Text Available The theory of the super network is an effective method for analyzing traffic networks with multiple means of decision-making. It provides a favorable pricing decision tool because it combines a practical transport network with spatial pricing decisions. The spatial price equilibrium problem has always been an important research direction in transport economics and regional transportation planning. As to how to combine the two, this paper presents a user equilibrium exchange allocation algorithm based on the super network, which successfully embeds the classical spatial price equilibrium (SPE) problem into a super-network analysis framework. Through super-network analysis, we add two virtual nodes to the network, corresponding to a super-supply virtual node and a super-demand virtual node, analyze the equivalence between user equilibrium and SPE equilibrium, and derive the concrete steps of the user exchange allocation algorithm based on super-network equilibrium. Finally, we carried out experiments for verification. The experiments show that, through the user equilibrium exchange allocation algorithm based on the super network, we can obtain the steady-state equilibrium solution, which demonstrates that the algorithm is reasonable.

  14. A CUDA-based reverse gridding algorithm for MR reconstruction.

    Science.gov (United States)

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using a non-Cartesian method can be transformed onto Cartesian grids by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N²), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K, and it involves a large number of matrix calculations including modulus, addition, multiplication and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a globally recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when multiple k-space data are synchronously convolved onto the same grid. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Different from the traditional GA, which generates a grid window for each trajectory, RGA calculates a trajectory window for each grid; this is what "reverse" means. For each k-space point in the CW, its contribution is accumulated to the grid. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we implement it only for PROPELLER. Experiments illustrate that this CUDA-based RGA successfully resolves the write-write conflict and reconstructs 7.5 times faster than the traditional GA.

  15. Workplace Based Assessment in Psychiatry

    Directory of Open Access Journals (Sweden)

    Ayse Devrim Basterzi

    2009-11-01

    Full Text Available Workplace-based assessment refers to the assessment of working practices based on what doctors actually do in the workplace, and is predominantly carried out in the workplace itself. Assessment drives learning, and it is therefore essential that workplace-based assessment focuses on important attributes rather than on what is easiest to assess. Workplace-based assessment is usually competency based. Workplace-based assessments may well facilitate and enhance various aspects of educational supervision, including its structure, frequency and duration. The structure and content of workplace-based assessments should be monitored to ensure that their benefits are maximised by remaining tailored to individual trainees' needs. Workplace-based assessment should be used for both formative and summative assessments. Several formative assessment methods have been developed for use in the workplace, such as the mini clinical evaluation exercise (mini-CEX), evidence-based journal club assessment, case-based discussion, and multi-source feedback. This review discusses the need for workplace-based assessments in psychiatry graduate education and introduces some of the workplace-based assessment methods.

  16. Performance Assessment of Hybrid Data Fusion and Tracking Algorithms

    DEFF Research Database (Denmark)

    Sand, Stephan; Mensing, Christian; Laaraiedh, Mohamed

    2009-01-01

    This paper presents an overview on the performance of hybrid data fusion and tracking algorithms evaluated in the WHERE consortium. The focus is on three scenarios. For the small scale indoor scenario with ultra wideband (UWB) complementing cellular communication systems, the accuracy can vary in...

  17. Assessing the Incremental Algorithm: A Response to Krahmer et al.

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    This response discusses the experiment reported in Krahmer et al.'s Letter to the Editor of "Cognitive Science". We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from "normal" (i.e.,…

  18. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    Science.gov (United States)

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in the monitoring, synthesis and modeling of urban environments. By incorporating spatial information over perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm is increasingly used as a preprocessing technique in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance across different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced that handles only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its computational time-efficiency remains competitive.
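    The color-density idea can be illustrated with a minimal 1-D Gaussian KDE (a generic sketch, not the SSLC implementation; the bandwidth value is an assumption):

```python
import numpy as np

def gaussian_kde_pdf(samples, x, bandwidth=0.1):
    """Estimate the PDF of 1-D color values in a superpixel via KDE
    with a Gaussian kernel, then evaluate it at point x."""
    diffs = (x - samples) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean() / bandwidth
```

    In a superpixel setting, each region keeps such a density per color channel and a candidate boundary pixel is assigned to the region under which it is most probable.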

  19. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    Science.gov (United States)

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced a great growth over the last years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slow changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds a great value to the state of the art, because the vast majority of the drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements eliminating the need that the user takes any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed out of all associations and used during a post-processing stage to obtain a low-drifted yaw angle estimation, that leads to successfully drift compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622

  1. Effective pathfinding for four-wheeled robot based on combining Theta* and hybrid A* algorithms

    Directory of Open Access Journals (Sweden)

    Віталій Геннадійович Михалько

    2016-07-01

    Full Text Available An effective pathfinding algorithm based on the Theta* and Hybrid A* algorithms was developed for a four-wheeled robot. Pseudocode for the algorithm is shown and explained. The algorithm and a simulator for the four-wheeled robot were implemented in the Java programming language. The algorithm was tested on U-shaped obstacles, complex maps and the parking problem
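    The key step that separates Theta* from plain A* is the line-of-sight test that lets a node link directly to its parent's parent when the straight segment between them is free. A simple supersampling sketch (hypothetical illustration, not the article's Java implementation):

```python
def line_of_sight(grid, a, b):
    """Check whether the straight segment a->b crosses any blocked cell
    (True in `grid` means blocked). Theta* uses this test to shortcut a
    node to its parent's parent, producing any-angle paths."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0)) * 2 + 1   # oversample the segment
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if grid[x][y]:
            return False
    return True
```

    A production implementation would instead trace the exact cells with a Bresenham-style walk, but the oversampling version conveys the idea.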

  2. A Multiple-Neighborhood-Based Parallel Composite Local Search Algorithm for Timetable Problem

    Institute of Scientific and Technical Information of China (English)

    颜鹤; 郁松年

    2004-01-01

    This paper presents a parallel composite local search algorithm based on multiple search neighborhoods to solve a special kind of timetable problem. The new algorithm can also effectively solve those problems that can be solved by general local search algorithms. Experimental results show that the new algorithm can generate better solutions than general local search algorithms.

  3. New Iterative Learning Control Algorithms Based on Vector Plots Analysis

    Institute of Scientific and Technical Information of China (English)

    XIE Sheng-Li; TIAN Sen-Ping; XIE Zhen-Dong

    2004-01-01

    Based on vector plots analysis, this paper studies the geometric frame of the iterative learning control method. A new structure of iterative learning algorithms is obtained by analyzing the vector plots of some general algorithms. The structure of the new algorithm is different from those of the present algorithms, and it has faster convergence speed and higher accuracy. Simulations presented here illustrate the effectiveness and advantage of the new algorithm.

  4. Measurement Theory in Deutsch's Algorithm Based on the Truth Values

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao

    2016-08-01

    We propose a new measurement theory for handling qubits, based on the truth values, i.e., the truth T (1) for true and the falsity F (0) for false. The results of measurement are either 0 or 1. To implement Deutsch's algorithm, we need both observability and controllability of a quantum state; the new measurement theory can satisfy these two requirements. In particular, we systematically describe our assertion based on more mathematical analysis using raw data in a thought experiment.

  5. Immune and Genetic Algorithm Based Assembly Sequence Planning

    Institute of Scientific and Technical Information of China (English)

    YANG Jian-guo; LI Bei-zhi; YU Lei; JIN Yu-song

    2004-01-01

    In this paper, an assembly sequence planning model inspired by natural immunity and genetic algorithms (ASPIG), based on the part degrees-of-freedom matrix (PDFM), is proposed, and a prototype system, DSFAS, based on the ASPIG is introduced to solve the assembly sequence problem. The concept and generation of the PDFM and DSFAS are also discussed. DSFAS can prevent premature convergence, promote population diversity, and accelerate the learning and convergence speed in behavior evolution problems.

  6. TOA-BASED ROBUST LOCATION ALGORITHMS FOR WIRELESS CELLULAR NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Sun Guolin; Guo Wei

    2005-01-01

    Caused by the Non-Line-Of-Sight (NLOS) propagation effect, the non-symmetric contamination of measured Time Of Arrival (TOA) data leads to high inaccuracies in conventional TOA-based mobile location techniques. Robust position estimation methods based on bootstrapping M-estimation and the Huber estimator are proposed to mitigate the effects of NLOS propagation on the location error. Simulation results show the improvement over the traditional Least-Squares (LS) algorithm in location accuracy under different channel environments.
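    The robustness of the Huber estimator to one-sided NLOS bias can be illustrated with a minimal IRLS sketch for a scalar location (a generic illustration, not the paper's bootstrapping M-estimation procedure; the tuning constant k=1.345 is the customary default):

```python
import numpy as np

def huber_location(x, k=1.345, iters=50):
    """Robust location estimate via iteratively reweighted least squares
    with Huber weights; large (e.g. NLOS-biased) residuals are down-weighted."""
    mu = np.median(x)                      # robust starting point
    for _ in range(iters):
        r = x - mu
        w = np.ones_like(r)
        big = np.abs(r) > k
        w[big] = k / np.abs(r[big])        # Huber weight for large residuals
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

    With one large positive NLOS outlier among the measurements, the Huber estimate stays near the true value while the plain mean is dragged upward.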

  7. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    OpenAIRE

    Hiroki Yoshioka; Kenta Obata

    2011-01-01

    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...

  8. Performance assessment of different pulse reconstruction algorithms for the ATHENA X-ray Integral Field Unit

    Science.gov (United States)

    Peille, Philippe; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; Barret, Didier; den Herder, Jan-Willem; Piro, Luigi; Barcons, Xavier; Pointecouteau, Etienne

    2016-07-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
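    For reference, the standard optimal filter reduces, under white noise, to projecting the measured record onto the known pulse template (a generic sketch, not the X-IFU on-board implementation; the general case weights by the inverse noise power spectrum in the frequency domain):

```python
import numpy as np

def optimal_filter_amplitude(record, template):
    """White-noise optimal-filter amplitude estimate: least-squares
    projection of the measured record onto the pulse template."""
    return np.dot(template, record) / np.dot(template, template)
```

    The recovered amplitude is then mapped to a photon energy through a ground-calibrated gain curve.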

  9. An image based auto-focusing algorithm for digital fundus photography.

    Science.gov (United States)

    Moscaritolo, Michele; Jampel, Henry; Knezevich, Frederick; Zeimer, Ran

    2009-11-01

    In fundus photography, the task of fine focusing the image is demanding, and lack of focus is quite often the cause of suboptimal photographs. The introduction of digital cameras has provided an opportunity to automate the task of focusing. We have developed a software algorithm capable of identifying best focus. The auto-focus (AF) method is based on an algorithm we developed to assess the sharpness of an image. The AF algorithm was tested in the prototype of a semi-automated nonmydriatic fundus camera designed to screen in the primary care environment for major eye diseases. A series of images was acquired in volunteers while focusing the camera on the fundus. The image with the best focus was determined by the AF algorithm and compared to the assessment of two masked readers. A set of fundus images was obtained in 26 eyes of 20 normal subjects and 42 eyes of 28 glaucoma patients. The 95% limits of agreement between the readers and the AF algorithm were -2.56 to 2.93 and -3.7 to 3.84 diopters, and the bias was 0.09 and 0.71 diopters, for the two readers respectively. On average, the readers agreed with the AF algorithm on the best correction to within less than 3/4 diopter. The intraobserver repeatability was 0.94 and 1.87 diopters, for the two readers respectively, indicating that the limit of agreement with the AF algorithm was determined predominantly by the repeatability of each reader. An auto-focus algorithm for digital fundus photography can identify the best focus reliably and objectively. It may improve the quality of fundus images by easing the task of the photographer.
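    A typical image-based focus score of the kind such AF algorithms rely on is gradient energy (a generic sketch; the paper's actual sharpness measure is not described here):

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus measure: larger values mean a sharper
    image; an autofocus loop picks the frame maximizing this score."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))
```

    A sharp step edge scores higher than the same intensity change smeared into a ramp, which is exactly the behavior an AF loop needs when sweeping through focus positions.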

  10. Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm

    Science.gov (United States)

    Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng

    2017-07-01

    A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two sensors, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has an entropy value extremely similar to that of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify green land for comparing the effect of the self-fused image of SPOT-6 and the inter-fused image between SPOT-6 and GF-1 based on the NNDiffuse algorithm. The overall accuracy is 97.27% and 76.88%, respectively, showing that the self-fused image of SPOT-6 has better identification capability.

  11. Assessment of a pesticide exposure intensity algorithm in the agricultural health study.

    Science.gov (United States)

    Thomas, Kent W; Dosemeci, Mustafa; Coble, Joseph B; Hoppin, Jane A; Sheldon, Linda S; Chapa, Guadalupe; Croghan, Carry W; Jones, Paul A; Knott, Charles E; Lynch, Charles F; Sandler, Dale P; Blair, Aaron E; Alavanja, Michael C

    2010-09-01

    The accuracy of the exposure assessment is a critical factor in epidemiological investigations of pesticide exposures and health in agricultural populations. However, few studies have been conducted to evaluate questionnaire-based exposure metrics. The Agricultural Health Study (AHS) is a prospective cohort study of pesticide applicators who provided detailed questionnaire information on their use of specific pesticides. A field study was conducted for a subset of the applicators enrolled in the AHS to assess a pesticide exposure algorithm through comparison of algorithm intensity scores with measured exposures. Pre- and post-application urinary biomarker measurements were made for 2,4-D (n=69) and chlorpyrifos (n=17) applicators. Dermal patch, hand wipe, and personal air samples were also collected. Intensity scores were calculated using information from technician observations and an interviewer-administered questionnaire. Correlations between observer and questionnaire intensity scores were high (Spearman's r=0.92 and 0.84 for 2,4-D and chlorpyrifos, respectively). Intensity scores from questionnaires for individual applications were significantly correlated with post-application urinary concentrations for both 2,4-D (r=0.42, P<0.001) and chlorpyrifos (r=0.53, P=0.035) applicators. Significant correlations were also found between intensity scores and estimated hand loading, estimated body loading, and air concentrations for 2,4-D applicators (r-values 0.28-0.50, P-values<0.025). Correlations between intensity scores and dermal and air measures were generally lower for chlorpyrifos applicators using granular products. A linear regression model indicated that the algorithm factors for individual applications explained 24% of the variability in post-application urinary 2,4-D concentration, which increased to 60% when the pre-application urine concentration was included. The results of the measurements support the use of the algorithm for estimating questionnaire-based…

  12. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial question during the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data on the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and the smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the assembly line balancing method based on the PSO algorithm are proved.
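    The mechanics of PSO can be sketched on a toy continuous objective (a generic minimal implementation, not the paper's; a real fitness function would score task assignments by takt time and smoothness index, and all hyperparameters below are assumed defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over R^dim."""
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal best positions
    pval = np.array([f(p) for p in x])          # personal best values
    g = pbest[pval.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

    For line balancing, the continuous particle vector is usually decoded into a discrete task-to-station assignment before evaluation; that decoding step is the problem-specific part.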

  13. A self region based real-valued negative selection algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng-bin; WANG Da-wei; WANG Sheng-wen

    2008-01-01

    Point-wise negative selection algorithms, which generate their detector sets based on points of self data, have lower training efficiency and detection rates. To solve this problem, a self-region-based real-valued negative selection algorithm is presented. In this new approach, the continuous self region is defined by the collection of self data; partial training takes place at the training stage according to both the radius of the self region and the cosine distance between the center of gravity of the self region and a detector candidate; and variable detectors in the self region are deployed. The algorithm is tested using a triangular self region in the 2-D complement space and the KDD CUP 1999 data set. Results show that more information can be provided when the training self points are used together as a whole and that, compared with the point-wise negative selection algorithm, the new approach can improve the training efficiency of the system and the detection rate significantly.
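    The core of real-valued negative selection can be sketched as follows (a generic illustration with a fixed detector radius, not the paper's variable-detector, cosine-distance variant):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_detectors(self_points, self_radius, n_detectors, det_radius):
    """Real-valued negative selection: keep only random detector
    candidates that do not overlap the self region."""
    detectors = []
    while len(detectors) < n_detectors:
        c = rng.random(self_points.shape[1])                 # candidate in [0,1]^d
        d = np.linalg.norm(self_points - c, axis=1).min()    # distance to self
        if d > self_radius + det_radius:                     # outside self region
            detectors.append(c)
    return np.array(detectors)

def is_anomalous(x, detectors, det_radius):
    """A sample is flagged non-self if any detector covers it."""
    return bool((np.linalg.norm(detectors - x, axis=1) <= det_radius).any())
```

    By construction, every retained detector sits strictly outside the padded self region, so self samples are never flagged.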

  14. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
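    The selection criterion described above reduces to computing the Shannon entropy of each candidate experiment's predicted outcome distribution and keeping the maximizer; a minimal sketch (the nested-sampling-style rising-threshold search itself is not reproduced here):

```python
import numpy as np

def shannon_entropy(p):
    """Entropy (in nats) of a discrete outcome distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log 0 := 0
    return float(-np.sum(p * np.log(p)))

def best_experiment(candidates):
    """candidates: mapping experiment -> predicted outcome probabilities.
    The most informative experiment maximizes outcome entropy."""
    return max(candidates, key=lambda e: shannon_entropy(candidates[e]))
```

    An experiment whose outcome is nearly certain (e.g. probabilities 0.9/0.1) is less informative than one with maximally uncertain outcomes (0.5/0.5), which is why entropy is the relevance score.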

  15. Family genetic algorithms based on gene exchange and its application

    Institute of Scientific and Technical Information of China (English)

    Li Jianhua; Ding Xiangqian; Wang Sunan; Yu Qing

    2006-01-01

    Genetic algorithms (GAs) are a search technique based on the mechanics of natural selection and have already been successfully applied in many diverse areas. However, a growing number of examples show that GA performance is not as good as expected; criticism of the algorithm includes slow speed and premature convergence. In order to improve performance, the population size and the individuals' space are examined in detail, and the influence of the individuals' space and population size on the operators is analyzed. On the basis of this analysis, a novel family genetic algorithm (FGA) is put forward. In this algorithm, families are constructed around high-quality individuals found by a search in the whole space, and further search is carried out within each such micro-space. A family that can find better genes within a limited period of time wins a new life. At the same time, the best genes of the micro-spaces are exchanged with the basic population in the whole space. Finally, the FGA is applied to function optimization and image matching in several experiments. The results show that the FGA possesses high performance.

  16. Application of neural based estimation algorithm for gait phases of above knee prosthesis.

    Science.gov (United States)

    Tileylioğlu, E; Yilmaz, A

    2015-01-01

    In this study, two gait phase estimation methods, which utilize a rule-based quantization and an artificial neural network model respectively, are developed and applied to a microcontroller-based semi-active knee prosthesis in order to respond to user demands and adapt to environmental conditions. In this context, an experimental environment was set up in which gait data were collected synchronously from both inertial and image-based measurement systems. The inertial measurement system, which incorporates MEMS accelerometers and gyroscopes, is used to perform direct motion measurement through the microcontroller, while the image-based measurement system is employed for producing the verification data and assessing the success of the prosthesis. Embedded algorithms dynamically normalize the input data prior to gait phase estimation. The real-time analyses of the two methods revealed that the embedded ANN-based approach performs slightly better than the rule-based algorithm and has the advantage of being easily scalable, and is thus able to accommodate additional input parameters within the microcontroller constraints.

  17. aTrunk—An ALS-Based Trunk Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Sebastian Lamprecht

    2015-08-01

    Full Text Available This paper presents a rapid multi-return ALS-based (Airborne Laser Scanning) tree trunk detection approach. The multi-core Divide & Conquer algorithm uses a CBH (Crown Base Height) estimation and a 3D-clustering approach to isolate points associated with single trunks. For each trunk, a principal-component-based linear model is fitted, while a deterministic modification of LO-RANSAC is used to identify an optimal model. The algorithm returns a vector-based model for each identified trunk, with parameters such as the ground position, zenith orientation, azimuth orientation and length of the trunk. The algorithm performed well for a study area of 109 trees (about 2/3 Norway Spruce and 1/3 European Beech) with a point density of 7.6 points per m², reaching a detection rate of about 75% and an overall accuracy of 84%. Compared to crown-based tree detection methods, the aTrunk approach has the advantages of high reliability (5% commission error) and high tree positioning accuracy (0.59 m average difference and 0.78 m RMSE). The usage of overlapping segments with parametrizable size allows a seamless detection of the tree trunks.
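    The principal-component line fit used per trunk can be sketched with an SVD (a generic illustration, not the aTrunk LO-RANSAC model selection):

```python
import numpy as np

def fit_trunk_axis(points):
    """Principal-component line fit to a trunk point cluster: returns
    the centroid and the unit direction of largest variance."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # rows of vt: principal axes
    direction = vt[0]                          # axis of largest variance
    if direction[2] < 0:                       # orient the axis upward (+z)
        direction = -direction
    return centroid, direction
```

    From the fitted line, zenith and azimuth orientation follow directly from the direction vector, and the ground position from intersecting the line with the terrain height.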

  18. Fully automated algorithm for wound surface area assessment.

    Science.gov (United States)

    Deana, Alessandro Melo; de Jesus, Sérgio Henrique Costa; Sampaio, Brunna Pileggi Azevedo; Oliveira, Marcelo Tavares; Silva, Daniela Fátima Teixeira; França, Cristiane Miranda

    2013-01-01

    Worldwide, clinicians, dentists, nurses, researchers, and other health professionals need to monitor the wound healing progress and to quantify the rate of wound closure. The aim of this study is to demonstrate, step by step, a fully automated numerical method to estimate the size of the wound and the percentage damaged relative to the body surface area (BSA) in images, without the requirement for human intervention. We included the formula for BSA in rats in the algorithm. The methodology was validated in experimental wounds and human ulcers and was compared with the analysis of an experienced pathologist, with good agreement. Therefore, this algorithm is suitable for experimental wounds and burns and human ulcers, as they have a high contrast with adjacent normal skin.

  19. A Survey Paper on Deduplication by Using Genetic Algorithm Alongwith Hash-Based Algorithm

    Directory of Open Access Journals (Sweden)

    Miss. J. R. Waykole

    2014-01-01

    Full Text Available In today's world, with the increasing volume of information available in digital libraries, many systems may be affected by the existence of replicas in their warehouses. A clean and replica-free warehouse not only allows the retrieval of higher-quality information but also leads to more concise data and reduces the computational time and resources needed to process those data. Here, we propose a genetic programming approach combined with hash-based similarity, i.e., the MD5 and SHA-1 algorithms. This approach removes replicated data and finds an optimized solution to the deduplication of records.
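    The hash-based half of the approach can be sketched in a few lines (a generic exact-duplicate filter using MD5 from Python's standard hashlib; the genetic-programming similarity function of the paper is not reproduced):

```python
import hashlib

def dedup(records):
    """Hash-based deduplication: identical normalized records collapse
    onto the same MD5 digest, so only the first copy is kept."""
    seen, unique = set(), []
    for rec in records:
        key = hashlib.md5(rec.strip().lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

    Hashing only catches exact (normalized) duplicates; near-duplicates with small textual differences are what the learned similarity function is for.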

  20. Staff line detection and revision algorithm based on subsection projection and correlation algorithm

    Science.gov (United States)

    Yang, Yin-xian; Yang, Ding-li

    2013-03-01

    Staff line detection plays a key role in OMR technology and is the precondition of the subsequent segmentation and recognition of music sheets. For the phenomena of horizontal inclination and curvature of staff lines and vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to detect the original staff lines and revise them, in an effort to implement staff line detection more successfully. Experimental results show that the presented algorithm can detect and revise staff lines quickly and effectively.
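    The projection idea can be sketched on a binary score image (a simplified whole-width version; the subsection variant applies the same profile per vertical strip so that skewed or curved lines still project sharply, and the threshold here is an assumed value):

```python
import numpy as np

def staff_line_rows(binary_img, threshold=0.8):
    """Horizontal projection profile: rows whose black-pixel ratio
    exceeds a threshold of the image width are staff-line candidates."""
    profile = binary_img.sum(axis=1) / binary_img.shape[1]
    return np.flatnonzero(profile >= threshold)
```

    Running this per subsection and chaining the detected row positions across strips is what lets the revision step follow inclined or curved staff lines.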